http://mathhelpforum.com/advanced-algebra/182774-finite-p-group.html
# Thread: Finite p-group

1. ## Finite p-group

This is an exercise from a textbook with many mistakes – not only typos – so I am not sure if it actually holds.

Let $G$ be a finite $p$-group ($p$ = prime). If $G/G'$ is cyclic then $G$ is cyclic too. ($G'$ is the commutator subgroup, or derived group, of $G$.)

2. Originally Posted by zoek
This is an exercise from a textbook with many mistakes – not only typos – so I am not sure if it actually holds. Let $G$ be a finite $p$-group ($p$ = prime). If $G/G'$ is cyclic then $G$ is cyclic too. ($G'$ is the commutator subgroup, or derived group, of $G$.)

This is a very nice and non-trivial exercise, and we need to be sure we know/can prove the following:

1) A finite $p$-group is always nilpotent (hint: if $G$ is such a group, then $|Z(G)| > 1$);

2) In a nilpotent group $G$ we always have that $G' \leq \Phi(G)$, the Frattini subgroup of $G$ (hint: for any maximal sbgp. $M \leq G$, $G/M$ is abelian (even cyclic of prime order) and thus $G' \leq M$);

3) As $G/G' = \langle xG' \rangle$ for some $x \in G$, we get that $G = \langle x \rangle G' \Longrightarrow G = \langle x \rangle$ and we're done (hint: the Frattini sbgp. of a group is the set of all non-generators of the group. Apply now (2)).

Tonio

3. Originally Posted by tonio
This is a very nice and non-trivial exercise, and we need to be sure we know/can prove the following: 1) A finite $p$-group is always nilpotent (hint: if $G$ is such a group, then $|Z(G)| > 1$); 2) In a nilpotent group $G$ we always have that $G' \leq \Phi(G)$, the Frattini subgroup of $G$ (hint: for any maximal sbgp. $M \leq G$, $G/M$ is abelian (even cyclic of prime order) and thus $G' \leq M$); 3) As $G/G' = \langle xG' \rangle$ for some $x \in G$, we get that $G = \langle x \rangle G' \Longrightarrow G = \langle x \rangle$ and we're done (hint: the Frattini sbgp. of a group is the set of all non-generators of the group. Apply now (2)). Tonio

Is it true that $Z(G) \leq G^{\prime}$? Because that would make the question much easier! (Also, Tonio's approach is very similar to something called 'the Burnside Basis Theorem', which says that $G/\Phi(G)$ is abelian, and $G$ is generated by $n$ elements, where $G/\Phi(G)$ is $n$-generated. See Robinson, A Course in the Theory of Groups.)

4. Thank you both! I had not studied the Frattini subgroup until today. I shall do it now!

5. Originally Posted by Swlabr
Is it true that $Z(G) \leq G^{\prime}$? Because that would make the question much easier!

No, that's not true in general. For example, if $G$ is abelian, that certainly fails (though I suppose this would make the whole problem very easy... but still).

6. Originally Posted by topspin1617
No, that's not true in general. For example, if $G$ is abelian, that certainly fails (though I suppose this would make the whole problem very easy... but still).

Sorry – that was meant to be $G^{\prime} \leq Z(G)$, as then $G/G^{\prime} \rightarrow G/Z(G)$ surjectively, and so $G/Z(G)$ is cyclic. It is true that if $G/Z(G)$ is cyclic then $G$ is abelian, which isn't too hard to prove. However, this ($G^{\prime} \leq Z(G)$) does not hold in general either. It holds if and only if $G$ has nilpotency class at most 2 (because if $G^{\prime} \leq Z(G)$ then $[[G,G],G] \leq [Z(G),G] = 1$, and the converse is similar). So, we want to prove that $G$ has nilpotency class at most two... however, this seems to be harder than I initially thought... but I am still uneasy about Tonio's use of the Frattini subgroup. I mean, if the OP hadn't read the section on that yet, then there must be another proof!

7. i don't think that approach will work. for example $D_{16}$ has center $\{1, r^4\}$ but $[D_{16}, D_{16}] = \langle r^2 \rangle$, which is cyclic of order 4.
(of course $D_{16}$ has nilpotency class 3. so i suppose what we want to show is that $G/[G,G]$ cyclic $\Rightarrow$ $G$ is of nilpotency class 2. this isn't obvious. perhaps one could get around using the Frattini subgroup by showing $[G,G]$ is NOT maximal, and using induction).

8. Originally Posted by Deveno
i don't think that approach will work. for example $D_{16}$ has center $\{1, r^4\}$ but $[D_{16}, D_{16}] = \langle r^2 \rangle$, which is cyclic of order 4.

Yes, but the method is looking for a contradiction... so you can't give a real-life counter-example! (Also, $D_{16}$ has derived length 3, not 2... you'd need $D_8$ for that, I believe...)

9. ## Re: Finite p-group

Originally Posted by zoek
Let $G$ be a finite $p$-group ($p$ = prime). If $G/G'$ is cyclic then $G$ is cyclic too. ($G'$ is the commutator subgroup, or derived group, of $G$.)

I think that finally I managed to solve this exercise:

1. $G$ finite $p$-group $\Rightarrow G$ nilpotent
2. $G$ nilpotent $\Rightarrow G' \leq \Phi(G)$
3. $G/G'$ cyclic $\Rightarrow G/\Phi(G)$ cyclic $\overset{G \text{ fin. } p\text{-grp}}{\Longrightarrow} G$ cyclic.

Originally Posted by Swlabr
I mean, if the OP hadn't read the section on that yet, then there must be another proof!

It is only in my textbook that this exercise is located before the Frattini subgroup and nilpotent groups (and the solution given there is totally incorrect). Everywhere else I found something about this (Rotman, Robinson and the internet), it was after nilpotent groups and the Frattini subgroup.
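As an aside, the fact quoted in post 6 – that $G/Z(G)$ cyclic forces $G$ abelian – has a short standard argument; here is a sketch. Write $G/Z(G) = \langle g\,Z(G) \rangle$, so every $x \in G$ has the form $x = g^a z$ with $z \in Z(G)$. Then for $x = g^a z_1$ and $y = g^b z_2$:

$xy = g^a z_1\, g^b z_2 = g^{a+b} z_1 z_2 = g^b z_2\, g^a z_1 = yx,$

since $z_1, z_2$ are central and powers of $g$ commute with one another. Hence $G$ is abelian.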
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 64, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9251766204833984, "perplexity": 595.5416273748789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174157.36/warc/CC-MAIN-20170219104614-00142-ip-10-171-10-108.ec2.internal.warc.gz"}
http://oomph-lib.maths.man.ac.uk/doc/helmholtz/scattering/html/index.html
Example problem: The Helmholtz equation -- scattering problems

In this document we discuss the finite-element-based solution of the Helmholtz equation, an elliptic PDE that describes time-harmonic wave propagation problems. We start by reviewing the relevant theory and then present the solution of a simple model problem – the scattering of a planar wave from a circular cylinder.

Acknowledgement: This tutorial and the associated driver codes were developed jointly with Tarak Kharrat (EnstaParisTech, Paris).

# Theory: The Helmholtz equation for time-harmonic scattering problems

The Helmholtz equation governs time-harmonic solutions of problems governed by the linear wave equation

$$\nabla^2 U = \frac{1}{c^2} \frac{\partial^2 U}{\partial t^2}, \ \ \ \ \ \ \ (1)$$

where $c$ is the wavespeed. Assuming that $U({\bf x},t)$ is time-harmonic, with frequency $\omega$, we write the real function $U({\bf x},t)$ as

$$U({\bf x},t) = \mbox{Re}\left(u({\bf x})\, e^{-i\omega t}\right),$$

where $u({\bf x})$ is complex-valued. This transforms (1) into the Helmholtz equation

$$\nabla^2 u + k^2 u = 0, \ \ \ \ \ \ \ (2)$$

where $k = \omega/c$ is the wave number. Like other elliptic PDEs the Helmholtz equation admits Dirichlet, Neumann (flux) and Robin boundary conditions. If the equation is solved in an infinite domain (e.g. in scattering problems) the solution must satisfy the so-called Sommerfeld radiation condition, which in 2D has the form

$$\lim_{r \to \infty} \sqrt{r} \left(\frac{\partial u}{\partial r} - iku\right) = 0. \ \ \ \ \ \ \ (3)$$

Mathematically, this condition is required to ensure the uniqueness of the solution (and hence the well-posedness of the problem). In a physical context, such as a scattering problem, the condition ensures that the scattering of an incoming wave only produces outgoing, not incoming, waves from infinity.

# Discretisation by finite elements

The discretisation of the Helmholtz equation itself only requires a trivial modification of oomph-lib's Poisson elements – we simply add the $k^2 u$ term to the residual. Since most practical applications of the Helmholtz equation involve complex-valued solutions, we provide separate storage for the real and imaginary parts of the solution – each Node therefore stores two unknown values. By default, the real and imaginary parts are stored as values 0 and 1, respectively; see the section The enumeration of the unknowns for details.

The application of Dirichlet and Neumann boundary conditions is straightforward and follows the pattern employed for the solution of the Poisson equation:

• Dirichlet conditions are imposed by pinning the relevant nodal values and setting them to the appropriate prescribed values.

• Neumann (flux) boundary conditions are imposed via FaceElements (here the HelmholtzFluxElements). As usual we attach these to the faces of the "bulk" elements that are subject to the Neumann boundary conditions.

The imposition of the Sommerfeld radiation condition for problems in infinite domains is slightly more complicated. In the following discussion we will restrict ourselves to two dimensions and assume that the infinite domain is truncated at a circular artificial boundary of radius $R$. [This assumption is also made in the implementation of oomph-lib's FaceElements that allow the (approximate) imposition of the Sommerfeld radiation condition. The methodology can easily be modified to deal with other geometries but this has not been done yet – any volunteers?]

All methods exploit the fact that the relevant solution of the Helmholtz equation can be written in polar coordinates as

$$u(r,\theta) = \sum_{n=0}^{\infty} \left(a_n \cos(n\theta) + b_n \sin(n\theta)\right) H_n^{(1)}(kr), \ \ \ \ \ \ \ (4)$$

where the $a_n, b_n$ are suitable coefficients and $H_n^{(1)}$ is the $n$-th-order Hankel function of the first kind.
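To make expansion (4) concrete, here is a small self-contained sketch – independent of oomph-lib – that evaluates a truncated version of the series using the C++17 special functions in <cmath>; the coefficients $a_n$, $b_n$ below are placeholder values chosen purely for illustration:

#include <cmath>
#include <complex>
#include <iostream>
#include <vector>

// Evaluate a truncated version of expansion (4):
//   u(r,theta) = sum_n (a_n cos(n theta) + b_n sin(n theta)) H_n^(1)(k r),
// where H_n^(1) = J_n + i Y_n is the Hankel function of the first kind.
std::complex<double> u_series(double k, double r, double theta,
                              const std::vector<double>& a,
                              const std::vector<double>& b)
{
 std::complex<double> u(0.0,0.0);
 const std::complex<double> I(0.0,1.0);
 for (unsigned n=0; n<a.size(); n++)
  {
   double nu = static_cast<double>(n);

   // Hankel function of the first kind of order n (C++17 <cmath>)
   std::complex<double> h =
    std::cyl_bessel_j(nu,k*r) + I*std::cyl_neumann(nu,k*r);

   u += (a[n]*std::cos(nu*theta) + b[n]*std::sin(nu*theta))*h;
  }
 return u;
}

int main()
{
 // Placeholder coefficients -- purely illustrative, not from a real problem
 std::vector<double> a(3); a[0]=1.0; a[1]=0.5; a[2]=0.25;
 std::vector<double> b(3); b[0]=0.0; b[1]=0.1; b[2]=0.05;

 // Evaluate at (r,theta)=(1.5,0.3) for k=sqrt(10), matching the wavenumber
 // used in the driver code below
 std::complex<double> u = u_series(std::sqrt(10.0),1.5,0.3,a,b);
 std::cout << "u = " << u.real() << " + " << u.imag() << "i" << std::endl;
 return 0;
}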
## Approximate/absorbing boundary conditions (ABCs)

It is possible to derive approximate versions of the Sommerfeld radiation condition in which the normal derivative of the solution on the artificial boundary is related to its value and possibly its tangential derivatives. Such boundary conditions (sometimes referred to as approximate or absorbing boundary conditions – ABCs) are typically derived from asymptotic expansions of the solution at large distances from the origin and become more accurate the larger the radius $R$ of the artificial boundary is. Higher accuracy can therefore only be achieved by increasing the size of the computational domain, with an associated increase in computational cost.

oomph-lib provides an implementation of Feng's first-, second- and third-order ABCs (all taken from J. J. Shirron & I. Babuska's paper "A comparison of approximate boundary conditions and infinite element methods for exterior Helmholtz problems", Computer Methods in Applied Mechanics and Engineering 164 121-139 (1998), in which the authors compare the accuracy of these and many other approximate boundary conditions). Feng's first order ABC has the form

$$\frac{\partial u}{\partial n} = \left(ik - \frac{1}{2R}\right) u$$

(this is identical to the first-order Bayliss and Turkel boundary condition); Feng's second and third order ABCs add higher-order correction terms, involving the tangential derivatives of $u$ along the boundary.

All three boundary conditions are implemented in the class HelmholtzAbsorbingBCElement. The order of the approximation can be set via the member function HelmholtzAbsorbingBCElement::abc_order(). All three boundary conditions are local (relating the function to its normal derivative) and therefore do not change the sparsity of the resulting finite element equations.

## The Dirichlet-to-Neumann mapping (DtN)

Using (4), it is easy to show (see, e.g., J. Jin "The Finite Element Method in Electromagnetics (second edition)", Wiley (2002) p. 501ff – but note that Jin assumes that the potential varies like $e^{j\omega t}$ rather than $e^{-i\omega t}$ as assumed here) that the normal (radial) derivative, $\partial u/\partial n$, on the artificial boundary is given by

$$\left.\frac{\partial u}{\partial n}\right|_{r=R} = \gamma(\theta), \ \ \ \ \ \ \ (5)$$

where

$$\gamma(\theta) = \frac{k}{\pi} \sum_{n=0}^{\infty} \frac{1}{1+\delta_{n0}}\, \frac{H_n^{(1)\prime}(kR)}{H_n^{(1)}(kR)} \int_0^{2\pi} u(R,\varphi)\, \cos\big(n(\theta-\varphi)\big)\, \mbox{d}\varphi. \ \ \ \ \ \ \ (6)$$

Equation (5) again provides a condition on the normal derivative of the solution along the artificial boundary and is implemented in the HelmholtzDtNBoundaryElement class. Since $\gamma$ depends on the solution everywhere along the artificial boundary (see (6)), the application of the boundary condition (5) introduces a non-local coupling between all the degrees of freedom located on that boundary. This is handled by classifying the unknowns that affect $\gamma$ but are not associated with the element's own nodes as external Data.

To facilitate the setup of the interaction between the HelmholtzDtNBoundaryElements, oomph-lib provides the class HelmholtzDtNMesh which provides storage for (the pointers to) the HelmholtzDtNBoundaryElements that discretise the artificial boundary. The member function HelmholtzDtNMesh::setup_gamma() pre-computes the values of $\gamma$ required for the imposition of equation (5). The radius $R$ of the artificial boundary and the (finite) number of (Fourier) terms used in the sum in (6) are specified as arguments to the constructor of the HelmholtzDtNMesh.

NOTE: Since $\gamma$ depends on the solution, it must be recomputed whenever the unknowns are updated during the Newton iteration. This is best done by adding a call to HelmholtzDtNMesh::setup_gamma() to Problem::actions_before_newton_convergence_check(). [If Helmholtz's equation is solved in isolation (or within a coupled, but linear, problem), Newton's method will converge in one iteration.
In such cases the unnecessary recomputation of $\gamma$ after the one-and-only Newton iteration can be suppressed by setting Problem::Problem_is_nonlinear to false.]

# A specific example: Scattering of an acoustic wave from a sound-hard obstacle

We will now demonstrate the methodology for a specific example: the scattering of sound waves in an acoustic medium of density $\rho$ and bulk modulus $\kappa$. Assuming that an incoming sound wave impacts a rigid, impermeable obstacle as shown in this sketch,

Scattering of an incoming wave from a sound-hard obstacle -- the scatterer.

we wish to find the wave field that is scattered from the body. For this purpose we denote the time-dependent displacement of the fluid particle in the acoustic medium by ${\bf d}^*({\bf x}^*,t^*)$ and introduce a displacement potential $\Phi^*$ such that ${\bf d}^* = \nabla \Phi^*$. (As usual we employ asterisks to distinguish dimensional quantities from their non-dimensional equivalents, to be introduced below.) It is easy to show that $\Phi^*$ satisfies the linear wave equation (1) with wave speed $c = \sqrt{\kappa/\rho}$.

Since the surface of the scatterer is impenetrable, the normal displacement of the fluid has to vanish on it, and the boundary condition for the displacement potential becomes

$$\frac{\partial \Phi^*}{\partial n} = 0 \mbox{\ \ \ on the surface of the scatterer.} \ \ \ \ \ \ \ (7)$$

We non-dimensionalise all lengths and displacements on some problem-dependent lengthscale $\mathcal{L}$ (e.g. the radius of the scatterer), non-dimensionalise the potential as $\Phi^* = \mathcal{L}^2 \phi$, and scale time on the period of the oscillation, $t^* = \frac{2\pi}{\omega}\, t$. The governing equation then becomes

$$\nabla^2 \phi = \left(\frac{k}{2\pi}\right)^2 \frac{\partial^2 \phi}{\partial t^2}, \ \ \ \ \ \ \ (8)$$

where the square of the (non-dimensional) wavenumber is given by

$$k^2 = \frac{\rho\, \omega^2 \mathcal{L}^2}{\kappa}.$$

Assuming that the incoming wave (already satisfying (8)) is described by a (known) non-dimensional displacement potential of the form $\phi_{\rm inc} = \mbox{Re}\left(u_{\rm inc}({\bf x})\, e^{-2\pi i t}\right)$, we write the total potential as

$$u = u_{\rm inc} + u_{\rm s},$$

where $u_{\rm s}$ represents the displacement potential associated with the scattered field, which must satisfy (2). The boundary condition (7) then becomes a Neumann (flux) boundary condition for the scattered field,

$$\frac{\partial u_{\rm s}}{\partial n} = -\frac{\partial u_{\rm inc}}{\partial n} \mbox{\ \ \ on the surface of the scatterer.} \ \ \ \ \ \ \ (9)$$

For the special case of the incoming wave being a planar wave, propagating along the x-axis, the incoming field can be written in polar coordinates as

$$u_{\rm inc} = e^{ikx} = \sum_{n=-\infty}^{\infty} i^n J_n(kr)\, e^{in\theta},$$

where $J_n$ is the Bessel function of the first kind of order $n$. The exact solution for the scattering of such a wave from a circular disk is given by the series

$$u_{\rm s}(r,\theta) = -\sum_{n=-\infty}^{\infty} i^n\, \frac{J_n'(k)}{H_n^{(1)\prime}(k)}\, H_n^{(1)}(kr)\, e^{in\theta}, \ \ \ \ \ \ \ (10)$$

where we have chosen the disk's radius, $a$, as the lengthscale by setting $\mathcal{L} = a$. In the above expression, $H_n^{(1)}$ denotes the Hankel function of the first kind of order $n$ and the prime denotes differentiation with respect to the function's argument.

A quantity that is of particular interest in wave propagation problems is the time-average of the power radiated by the scatterer, $\overline{\mathcal{P}}$. In the context of an acoustic wave, the total instantaneous power, $\mathcal{P}(t)$, radiated over a closed boundary $\Gamma$ is

$$\mathcal{P}(t) = \oint_\Gamma p^*\, \frac{\partial}{\partial t^*} \left(\frac{\partial \Phi^*}{\partial n}\right) \mbox{d}s^*,$$

where the pressure $p^*$ is related to the displacement potential via

$$p^* = -\rho\, \frac{\partial^2 \Phi^*}{\partial t^{*2}}.$$

The non-dimensional time-averaged radiated power can then be expressed in terms of the complex potential (up to the constant factor implied by the non-dimensionalisation) as

$$\overline{\mathcal{P}} = \frac{1}{2} \oint_\Gamma \mbox{Im}\left(\overline{u}\, \frac{\partial u}{\partial n}\right) \mbox{d}s.$$

# Results

The figure below shows an animation of the displacement potential for scattering from a circular disk for a non-dimensional wavenumber of $k^2 = 10$ over one period of the oscillation. The simulation was performed in an annular computational domain, bounded by the surface of the (unit) disk and an artificial outer boundary of non-dimensional radius $R = 1.5$. The Sommerfeld radiation condition was imposed using the DtN mapping and the simulation was performed with spatial adaptivity (note the non-uniform refinement). The "carpet plot" compares the exact (green) and computed (red) solutions for the displacement potential. The colours in the contour plot at the bottom of the figure provide an alternative visualisation of the magnitude of the scattered field.
The displacement potential associated with the scattered wave, animated over one period of the oscillation.

# The numerical solution

## The global namespace

As usual, we define the problem parameters in a global namespace. The main physical parameter is the (square of the) wave number, $k^2$. N_fourier is the number of (Fourier) terms to be used in the evaluation of the series in equations (6) and (10). The remaining parameters determine how the Sommerfeld radiation condition is applied.

//===== start_of_namespace=============================================
/// Namespace for the Helmholtz problem parameters
//=====================================================================
namespace GlobalParameters
{

 /// \short Square of the wavenumber
 double K_squared=10.0;

 /// \short Number of terms used in the computation
 /// of the exact solution
 unsigned N_fourier=10;

 /// \short Flag to choose the Dirichlet to Neumann BC
 /// or ABC BC
 bool DtN_BC=false;

 /// \short Flag to choose which order to use
 /// in the ABC BC: 1 for the first-order ABC, etc.
 unsigned ABC_order=3;

 /// Radius of outer boundary (must be a circle!)
 double Outer_radius=1.5;

 /// Imaginary unit
 std::complex<double> I(0.0,1.0);

The function get_exact_u returns the exact solution for the scattering problem. We will use this function for the validation of our results.

 /// \short Exact solution for scattered field
 /// (vector returns real and imaginary parts).
 void get_exact_u(const Vector<double>& x, Vector<double>& u)
 {
  // Switch to polar coordinates
  double r;
  r=sqrt(x[0]*x[0]+x[1]*x[1]);
  double theta;
  theta=atan2(x[1],x[0]);

  // Argument for Bessel/Hankel functions
  double rr=sqrt(K_squared)*r;

  // Evaluate Bessel/Hankel functions
  complex<double> u_ex(0.0,0.0);
  Vector<double> jn(N_fourier+1), yn(N_fourier+1),
   jnp(N_fourier+1), ynp(N_fourier+1);
  Vector<double> jn_a(N_fourier+1), yn_a(N_fourier+1),
   jnp_a(N_fourier+1), ynp_a(N_fourier+1);
  Vector<complex<double> > h(N_fourier+1), h_a(N_fourier+1),
   hp(N_fourier+1), hp_a(N_fourier+1);

  // We want to compute N_fourier terms but the function
  // may return fewer than that.
  int n_actual=0;
  CRBond_Bessel::bessjyna(N_fourier,sqrt(K_squared),n_actual,
                          &jn_a[0],&yn_a[0],
                          &jnp_a[0],&ynp_a[0]);

  // Shout if things went wrong
#ifdef PARANOID
  if (n_actual!=int(N_fourier))
   {
    std::ostringstream error_stream;
    error_stream << "CRBond_Bessel::bessjyna() only computed "
                 << n_actual << " rather than " << N_fourier
                 << " Bessel functions.\n";
    throw OomphLibError(error_stream.str(),
                        OOMPH_CURRENT_FUNCTION,
                        OOMPH_EXCEPTION_LOCATION);
   }
#endif

  // Evaluate Hankel at actual radius
  Hankel_functions_for_helmholtz_problem::Hankel_first(N_fourier,rr,h,hp);

  // Evaluate Hankel at inner (unit) radius
  Hankel_functions_for_helmholtz_problem::Hankel_first(N_fourier,
                                                       sqrt(K_squared),
                                                       h_a,hp_a);

  // Compute the sum: Separate the computation of the negative
  // and positive terms
  for (unsigned i=0;i<N_fourier;i++)
   {
    u_ex-=pow(I,i)*h[i]*((jnp_a[i])/hp_a[i])*pow(exp(I*theta),i);
   }
  for (unsigned i=1;i<N_fourier;i++)
   {
    u_ex-=pow(I,i)*h[i]*((jnp_a[i])/hp_a[i])*pow(exp(-I*theta),i);
   }

  // Get the real & imaginary part of the result
  u[0]=real(u_ex);
  u[1]=imag(u_ex);

 }// end of get_exact_u

Next we provide a function that computes the prescribed flux (normal derivative) of the solution, $\partial u/\partial n$, evaluated on the surface of the unit disk.
 /// \short Flux (normal derivative) on the unit disk
 /// for a planar incoming wave
 void prescribed_incoming_flux(const Vector<double>& x,
                               complex<double>& flux)
 {
  // Switch to polar coordinates
  double r;
  r=sqrt(x[0]*x[0]+x[1]*x[1]);
  double theta;
  theta=atan2(x[1],x[0]);

  // Argument of the Bessel/Hankel fcts
  double rr=sqrt(K_squared)*r;

  // Compute Bessel/Hankel functions
  Vector<double> jn(N_fourier+1), yn(N_fourier+1),
   jnp(N_fourier+1), ynp(N_fourier+1);

  // We want to compute N_fourier terms but the function
  // may return fewer than that.
  int n_actual=0;
  CRBond_Bessel::bessjyna(N_fourier,rr,n_actual,&jn[0],&yn[0],
                          &jnp[0],&ynp[0]);

  // Shout if things went wrong...
#ifdef PARANOID
  if (n_actual!=int(N_fourier))
   {
    std::ostringstream error_stream;
    error_stream << "CRBond_Bessel::bessjyna() only computed "
                 << n_actual << " rather than " << N_fourier
                 << " Bessel functions.\n";
    throw OomphLibError(error_stream.str(),
                        OOMPH_CURRENT_FUNCTION,
                        OOMPH_EXCEPTION_LOCATION);
   }
#endif

  // Compute the sum: Separate the computation of the negative and
  // positive terms
  flux=std::complex<double>(0.0,0.0);
  for (unsigned i=0;i<N_fourier;i++)
   {
    flux+=pow(I,i)*(sqrt(K_squared))*pow(exp(I*theta),i)*jnp[i];
   }
  for (unsigned i=1;i<N_fourier;i++)
   {
    flux+=pow(I,i)*(sqrt(K_squared))*pow(exp(-I*theta),i)*jnp[i];
   }

 }// end of prescribed_incoming_flux

} // end of namespace

## The driver code

The driver code is very straightforward. We parse the command line to determine which boundary condition to use and set the flags in the global namespace accordingly.

//==========start_of_main=================================================
/// Solve 2D Helmholtz problem for scattering of a planar wave from a
/// unit disk
//========================================================================
int main(int argc, char **argv)
{
 // Store command line arguments
 CommandLineArgs::setup(argc,argv);

 // Define case to be run
 unsigned i_case=0;
 CommandLineArgs::specify_command_line_flag("--case",&i_case);

 // Parse command line
 CommandLineArgs::parse_and_assign();

 // Doc what has actually been specified on the command line
 CommandLineArgs::doc_specified_flags();

 // Now set flags accordingly
 switch(i_case)
  {
  case 0:
   GlobalParameters::DtN_BC=true;
   break;

  case 1:
   GlobalParameters::DtN_BC=false;
   GlobalParameters::ABC_order=1;
   break;

  case 2:
   GlobalParameters::DtN_BC=false;
   GlobalParameters::ABC_order=2;
   break;

  case 3:
   GlobalParameters::DtN_BC=false;
   GlobalParameters::ABC_order=3;
   break;
  }

Next we build the problem, either with or without enabling spatial adaptivity, and define the output directory.

 //Set up the problem
 //------------------

#ifdef ADAPTIVE

 //Set up the problem with 2D nine-node refineable elements from the
 //QHelmholtzElement family.
 ScatteringProblem<RefineableQHelmholtzElement<2,3> > problem;

#else

 //Set up the problem with 2D nine-node elements from the
 //QHelmholtzElement family.
 ScatteringProblem<QHelmholtzElement<2,3> > problem;

#endif

 // Create label for output
 //------------------------
 DocInfo doc_info;

 // Set output directory
 doc_info.set_directory("RESLT");

Finally, we solve the problem and document the results.

#ifdef ADAPTIVE

 // Max. number of adaptations
 unsigned max_adapt=1;

 // Solve the problem with Newton's method, allowing
 // up to max_adapt mesh adaptations after every solve
 problem.newton_solve(max_adapt);

#else

 // Solve the problem with Newton's method
 problem.newton_solve();

#endif

 //Output solution
 problem.doc_solution(doc_info);

} //end of main

## The problem class

The problem class is very similar to that employed for the adaptive solution of the 2D Poisson equation with flux boundary conditions. The only difference is that we provide two separate meshes of FaceElements: one for the inner boundary, where the HelmholtzFluxElements apply the Neumann condition (9), and one for the outer boundary, where we apply the (approximate) Sommerfeld radiation condition.
As discussed in section The Dirichlet-to-Neumann mapping (DtN), we use the function actions_before_newton_convergence_check() to recompute the integral $\gamma$ whenever the unknowns are updated during the Newton iteration.

//========= start_of_problem_class=====================================
/// Problem class to compute scattering of planar wave from unit disk
//=====================================================================
template<class ELEMENT>
class ScatteringProblem : public Problem
{

public:

 /// Constructor
 ScatteringProblem();

 /// Destructor (empty)
 ~ScatteringProblem(){};

 /// \short Doc the solution. DocInfo object stores flags/labels for where the
 /// output gets written to
 void doc_solution(DocInfo& doc_info);

 /// \short Update the problem specs before solve (empty)
 void actions_before_newton_solve(){};

 /// Update the problem specs after solve (empty)
 void actions_after_newton_solve(){};

 /// Recompute gamma integral before checking Newton residuals
 void actions_before_newton_convergence_check()
  {
   if (GlobalParameters::DtN_BC)
    {
     Helmholtz_outer_boundary_mesh_pt->setup_gamma();
    }
  }

 /// Actions before adapt: Wipe the mesh of prescribed flux elements
 void actions_before_adapt();

 /// Actions after adapt: Rebuild the mesh of prescribed flux elements
 void actions_after_adapt();

 /// \short Create BC elements on boundary b of the Mesh pointed
 /// to by bulk_mesh_pt and add them to the specified surface Mesh
 void create_outer_bc_elements(
  const unsigned &b, Mesh* const &bulk_mesh_pt,
  Mesh* const & helmholtz_outer_boundary_mesh_pt);

 /// \short Create Helmholtz flux elements on boundary b of the Mesh pointed
 /// to by bulk_mesh_pt and add them to the specified surface Mesh
 void create_flux_elements(const unsigned &b, Mesh* const &bulk_mesh_pt,
                           Mesh* const & helmholtz_inner_boundary_mesh_pt);

 /// \short Delete boundary face elements and wipe the surface mesh
 void delete_face_elements( Mesh* const & boundary_mesh_pt);

 /// \short Set pointer to prescribed-flux function for all
 /// elements in the surface mesh on the surface of the unit disk
 void set_prescribed_incoming_flux_pt();

 /// \short Set up boundary condition elements on outer boundary
 void setup_outer_boundary();

#ifdef ADAPTIVE

 /// Pointer to the "bulk" mesh
 RefineableTwoDAnnularMesh<ELEMENT>* Bulk_mesh_pt;

#else

 /// Pointer to the "bulk" mesh
 TwoDAnnularMesh<ELEMENT>* Bulk_mesh_pt;

#endif

 /// \short Pointer to mesh containing the DtN (or ABC) boundary
 /// condition elements
 HelmholtzDtNMesh<ELEMENT>* Helmholtz_outer_boundary_mesh_pt;

 /// \short Pointer to the mesh containing
 /// the Helmholtz inner boundary condition elements
 Mesh* Helmholtz_inner_boundary_mesh_pt;

}; // end of problem class

## The problem constructor

We start by building the bulk mesh, using the refineable or non-refineable version of the TwoDAnnularMesh, depending on the macro ADAPTIVE. (The error tolerances for the adaptive version are chosen such that the mesh is refined non-uniformly – with the default tolerances, oomph-lib's automatic mesh adaptation procedure would refine the mesh uniformly.)
//=======start_of_constructor=============================================
/// Constructor for Helmholtz problem
//========================================================================
template<class ELEMENT>
ScatteringProblem<ELEMENT>::ScatteringProblem()
{

 // Setup "bulk" mesh

 // # of elements in theta
 unsigned n_theta=15;

 // # of elements in radius
 unsigned n_r=5;

 // Inner radius
 double a=1.0;

 // Thickness of annular computational domain
 double h=0.5;

 // Mesh is periodic
 bool periodic=true;

 // Full circle
 double azimuthal_fraction=1.0;

#ifdef ADAPTIVE

 // Build "bulk" mesh
 Bulk_mesh_pt=
  new RefineableTwoDAnnularMesh<ELEMENT>(periodic,
                                         azimuthal_fraction,n_theta,n_r,a,h);

 // Create/set error estimator
 Bulk_mesh_pt->spatial_error_estimator_pt()=new Z2ErrorEstimator;

 // Choose error tolerances to force some non-uniform refinement
 Bulk_mesh_pt->min_permitted_error()=0.004;
 Bulk_mesh_pt->max_permitted_error()=0.01;

#else

 // Build "bulk" mesh
 Bulk_mesh_pt=
  new TwoDAnnularMesh<ELEMENT>(periodic,
                               azimuthal_fraction,n_theta,n_r,a,h);

#endif

Next we create the two (empty) meshes for the FaceElements,

 // Pointer to mesh containing the Helmholtz outer boundary condition
 // elements. Specify outer radius and number of Fourier terms to be
 // used in gamma integral
 Helmholtz_outer_boundary_mesh_pt =
  new HelmholtzDtNMesh<ELEMENT>(a+h,GlobalParameters::N_fourier);

 // Pointer to mesh containing the Helmholtz inner boundary condition
 // elements
 Helmholtz_inner_boundary_mesh_pt = new Mesh;

and populate them using the functions create_flux_elements(...) and create_outer_bc_elements(...).

 // Create prescribed-flux elements from all elements that are
 // adjacent to the inner boundary, but add them to a separate mesh.
 create_flux_elements(0,Bulk_mesh_pt,Helmholtz_inner_boundary_mesh_pt);

 // Create outer boundary elements from all elements that are
 // adjacent to the outer boundary, but add them to a separate mesh.
 create_outer_bc_elements(2,Bulk_mesh_pt,Helmholtz_outer_boundary_mesh_pt);

We add the various (sub-)meshes to the problem and build the global mesh

 // Add the several sub-meshes to the problem
 add_sub_mesh(Bulk_mesh_pt);
 add_sub_mesh(Helmholtz_outer_boundary_mesh_pt);
 add_sub_mesh(Helmholtz_inner_boundary_mesh_pt);

 // Build the Problem's global mesh from its various sub-meshes
 build_global_mesh();

Finally, we complete the build of the various elements by passing pointers to the relevant quantities to them, and assign the equation numbers.

 // Complete the build of all elements so they are fully functional

 // Loop over the Helmholtz bulk elements to set up element-specific
 // things that cannot be handled by constructor: Pass pointer to
 // wave number squared
 unsigned n_element = Bulk_mesh_pt->nelement();
 for(unsigned e=0;e<n_element;e++)
  {
   // Upcast from GeneralisedElement to Helmholtz bulk element
   ELEMENT *el_pt = dynamic_cast<ELEMENT*>(Bulk_mesh_pt->element_pt(e));

   //Set the k_squared pointer
   el_pt->k_squared_pt() = &GlobalParameters::K_squared;
  }

 // Set up elements on outer boundary
 setup_outer_boundary();

 // Set pointer to prescribed flux function for flux elements
 set_prescribed_incoming_flux_pt();

 // Setup equation numbering scheme
 cout <<"Number of equations: " << assign_eqn_numbers() << std::endl;

} // end of constructor

The problem is now ready to be solved. The mesh adaptation is driven by the error estimates for the bulk elements. The various FaceElements must therefore be removed from the global mesh before the adaptation takes place. We do this by calling the function delete_face_elements(...) for the two face meshes, before rebuilding the Problem's global mesh.
//=====================start_of_actions_before_adapt=====================
/// Actions before adapt: Wipe the mesh of face elements
//========================================================================
template<class ELEMENT>
void ScatteringProblem<ELEMENT>::actions_before_adapt()
{
 // Kill the flux elements and wipe the boundary meshes
 delete_face_elements(Helmholtz_outer_boundary_mesh_pt);
 delete_face_elements(Helmholtz_inner_boundary_mesh_pt);

 // Rebuild the Problem's global mesh from its various sub-meshes
 rebuild_global_mesh();

} // end of actions_before_adapt

After the (bulk-)mesh has been adapted, the flux elements must be re-attached. This is done by calling the functions create_flux_elements(...) and create_outer_bc_elements(...), followed by rebuilding the Problem's global mesh. Finally, we complete the build of the FaceElements by calling the functions setup_outer_boundary() and set_prescribed_incoming_flux_pt().

//=====================start_of_actions_after_adapt=======================
/// Actions after adapt: Rebuild the face element meshes
//========================================================================
template<class ELEMENT>
void ScatteringProblem<ELEMENT>::actions_after_adapt()
{
 // Create prescribed-flux elements and BC elements
 // from all elements that are adjacent to the boundaries and add them to
 // Helmholtz_boundary_meshes
 create_outer_bc_elements(2,Bulk_mesh_pt,Helmholtz_outer_boundary_mesh_pt);
 create_flux_elements(0,Bulk_mesh_pt,Helmholtz_inner_boundary_mesh_pt);

 // Rebuild the Problem's global mesh from its various sub-meshes
 rebuild_global_mesh();

 // Set pointer to prescribed flux function and DtN mesh
 setup_outer_boundary();
 set_prescribed_incoming_flux_pt();

} // end of actions_after_adapt

## Delete flux elements

The helper function delete_face_elements() is used to delete all FaceElements in a given surface mesh before the mesh adaptation.

//============start_of_delete_face_elements================
/// Delete face elements and wipe the boundary mesh
//==========================================================
template<class ELEMENT>
void ScatteringProblem<ELEMENT>::
delete_face_elements(Mesh* const & boundary_mesh_pt)
{
 // Loop over the surface elements
 unsigned n_element = boundary_mesh_pt->nelement();
 for(unsigned e=0;e<n_element;e++)
  {
   // Kill surface element
   delete boundary_mesh_pt->element_pt(e);
  }

 // Wipe the mesh
 boundary_mesh_pt->flush_element_and_node_storage();

} // end of delete_face_elements

## Creating the face elements

The functions create_flux_elements(...) and create_outer_bc_elements(...) create the FaceElements required to apply the boundary conditions on the inner and outer boundaries of the annular computational domain. They both loop over the bulk elements that are adjacent to the appropriate mesh boundary and attach the required FaceElements to their faces. The newly created FaceElements are then added to the appropriate mesh.

//============start_of_create_outer_bc_elements==============================
/// Create outer BC elements on the b-th boundary of
/// the Mesh object pointed to by bulk_mesh_pt and add the elements
/// to the Mesh object pointed to by helmholtz_outer_boundary_mesh_pt.
//===========================================================================
template<class ELEMENT>
void ScatteringProblem<ELEMENT>::
create_outer_bc_elements(const unsigned &b, Mesh* const &bulk_mesh_pt,
                         Mesh* const & helmholtz_outer_boundary_mesh_pt)
{
 // Loop over the bulk elements adjacent to boundary b
 unsigned n_element = bulk_mesh_pt->nboundary_element(b);
 for(unsigned e=0;e<n_element;e++)
  {
   // Get pointer to the bulk element that is adjacent to boundary b
   ELEMENT* bulk_elem_pt = dynamic_cast<ELEMENT*>(
    bulk_mesh_pt->boundary_element_pt(b,e));

   //Find the index of the face of element e along boundary b
   int face_index = bulk_mesh_pt->face_index_at_boundary(b,e);

   // Build the corresponding outer flux element

   // Dirichlet-to-Neumann boundary condition
   if (GlobalParameters::DtN_BC)
    {
     HelmholtzDtNBoundaryElement<ELEMENT>* flux_element_pt = new
      HelmholtzDtNBoundaryElement<ELEMENT>(bulk_elem_pt,face_index);

     //Add the flux boundary element to the helmholtz_outer_boundary_mesh
     helmholtz_outer_boundary_mesh_pt->add_element_pt(flux_element_pt);
    }
   // ABC boundary condition
   else
    {
     HelmholtzAbsorbingBCElement<ELEMENT>* flux_element_pt = new
      HelmholtzAbsorbingBCElement<ELEMENT>(bulk_elem_pt,face_index);

     //Add the flux boundary element to the helmholtz_outer_boundary_mesh
     helmholtz_outer_boundary_mesh_pt->add_element_pt(flux_element_pt);
    }

  } //end of loop over bulk elements adjacent to boundary b

} // end of create_outer_bc_elements

(We omit the listing of the function create_flux_elements(...) because it is very similar. Feel free to inspect it in the source code.)

## Post-processing

The post-processing function doc_solution(...) computes and outputs the total radiated power, and plots the computed and exact solutions (real and imaginary parts).

//=====================start_of_doc=======================================
/// Doc the solution: doc_info contains labels/output directory etc.
//========================================================================
template<class ELEMENT>
void ScatteringProblem<ELEMENT>::doc_solution(DocInfo& doc_info)
{

 ofstream some_file,some_file2;
 char filename[100];

 // Number of plot points
 unsigned npts;
 npts=5;

 // Compute/output the radiated power
 //----------------------------------
 sprintf(filename,"%s/power%i.dat",doc_info.directory().c_str(),
         doc_info.number());
 some_file.open(filename);

 // Accumulate contribution from elements
 double power=0.0;
 unsigned nn_element=Helmholtz_outer_boundary_mesh_pt->nelement();
 for(unsigned e=0;e<nn_element;e++)
  {
   HelmholtzBCElementBase<ELEMENT> *el_pt =
    dynamic_cast<HelmholtzBCElementBase<ELEMENT>*>(
     Helmholtz_outer_boundary_mesh_pt->element_pt(e));
   power += el_pt->global_power_contribution(some_file);
  }
 some_file.close();
 oomph_info << "Total radiated power: " << power << std::endl;

 // Output solution
 //-----------------
 sprintf(filename,"%s/soln%i.dat",doc_info.directory().c_str(),
         doc_info.number());
 some_file.open(filename);
 Bulk_mesh_pt->output(some_file,npts);
 some_file.close();

 // Output exact solution
 //----------------------
 sprintf(filename,"%s/exact_soln%i.dat",doc_info.directory().c_str(),
         doc_info.number());
 some_file.open(filename);
 Bulk_mesh_pt->output_fct(some_file,npts,GlobalParameters::get_exact_u);
 some_file.close();

 // Compute error and norm of the solution
 double error,norm;
 sprintf(filename,"%s/error%i.dat",doc_info.directory().c_str(),
         doc_info.number());
 some_file.open(filename);
 Bulk_mesh_pt->compute_error(some_file,GlobalParameters::get_exact_u,
                             error,norm);
 some_file.close();

 // Doc L2 error and norm of solution
 oomph_info << "\nNorm of error : " << sqrt(error) << std::endl;
 oomph_info << "Norm of solution: " << sqrt(norm) << std::endl << std::endl;

Finally, we create the data required to produce an animation of the actual (real) potential at 40 instants during a period of the oscillation.
 // Do animation of Helmholtz solution
 //-----------------------------------
 unsigned nstep=40;
 for (unsigned i=0;i<nstep;i++)
  {
   sprintf(filename,"%s/helmholtz_animation%i_frame%i.dat",
           doc_info.directory().c_str(),
           doc_info.number(),i);
   some_file.open(filename);
   sprintf(filename,"%s/exact_helmholtz_animation%i_frame%i.dat",
           doc_info.directory().c_str(),
           doc_info.number(),i);
   some_file2.open(filename);
   double phi=2.0*MathematicalConstants::Pi*double(i)/double(nstep-1);
   unsigned nelem=Bulk_mesh_pt->nelement();
   for (unsigned e=0;e<nelem;e++)
    {
     ELEMENT* el_pt=dynamic_cast<ELEMENT*>(
      Bulk_mesh_pt->element_pt(e));
     el_pt->output_real(some_file,phi,npts);
     el_pt->output_real_fct(some_file2,phi,npts,
                            GlobalParameters::get_exact_u);
    }
   some_file.close();
   some_file2.close();
  }

} // end of doc

## The enumeration of the unknowns

As discussed in the introduction, most practically relevant solutions of the Helmholtz equation are complex valued. Since oomph-lib's solvers only deal with real (double precision) unknowns, the equations are separated into their real and imaginary parts. In the implementation of the Helmholtz elements, we store the real and imaginary parts of the solution as two separate values at each node. By default, the real and imaginary parts are accessible via Node::value(0) and Node::value(1).

However, to facilitate the use of the elements in multi-physics problems we avoid accessing the unknowns directly in this manner but provide the virtual function

std::complex<unsigned> HelmholtzEquations<DIM>::u_index_helmholtz()

which returns a complex number made of the two unsigneds that indicate which nodal value represents the real and imaginary parts of the solution. This function may be overloaded in combined multi-physics elements in which a Helmholtz element is combined (by multiple inheritance) with another element, using the strategy described in the Boussinesq convection tutorial.

## Exercises

### Exploiting linearity

Confirm that the (costly) re-computation of the integral $\gamma$ in actions_before_newton_convergence_check() after the first (and only) linear solve in the Newton iteration can be avoided by declaring the problem to be linear.

### The accuracy of the boundary condition elements

Explore the accuracy (and computational cost) of the various FaceElements that apply the Sommerfeld radiation condition. In particular, confirm that the accuracy of the DtN boundary condition is (nearly) independent of the radius of the artificial outer boundary, whereas the accuracy of the ABC boundary condition can only be improved by increasing the size of the computational domain.
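As a hypothetical starting point for the last exercise – this helper is not part of the tutorial's driver code – one might switch the order of the absorbing boundary condition on all ABC elements as follows, assuming oomph-lib's usual convention that the access function abc_order() (mentioned above) returns a reference to the stored order:

// Hypothetical helper (not in the tutorial driver): set the order of the
// absorbing boundary condition on all elements in the outer-boundary mesh.
// Assumes the mesh was populated with HelmholtzAbsorbingBCElements (i.e.
// the ABC rather than the DtN case) and that abc_order is 1, 2 or 3.
template<class ELEMENT>
void set_abc_order(Mesh* const& helmholtz_outer_boundary_mesh_pt,
                   const unsigned& abc_order)
{
 unsigned n_element = helmholtz_outer_boundary_mesh_pt->nelement();
 for(unsigned e=0;e<n_element;e++)
  {
   HelmholtzAbsorbingBCElement<ELEMENT>* el_pt =
    dynamic_cast<HelmholtzAbsorbingBCElement<ELEMENT>*>(
     helmholtz_outer_boundary_mesh_pt->element_pt(e));

   // Select the order of the approximate boundary condition
   el_pt->abc_order()=abc_order;
  }
}

Calling such a helper with orders 1 to 3, for a range of outer radii, and comparing the resulting errors against the DtN case would provide the comparison the exercise asks for.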
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8605428338050842, "perplexity": 2394.0511329975434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691977.66/warc/CC-MAIN-20170925145232-20170925165232-00175.warc.gz"}
http://physics.stackexchange.com/questions/41160/jumping-on-earth-versus-jumping-on-the-moon
# Jumping on earth versus jumping on the moon

Given the following problem:

On the moon the acceleration due to gravity is $g_m = 1.62 \ m/s^2$. On earth, a person of mass $m = 80 \ kg$ manages to jump $1.4 \ m$. Find the height this person will reach when jumping on the moon, if the person is wearing a spacesuit of mass $m_s = 124 \ kg$.

I am a little bit confused as to whether or not the given information regarding mass is actually needed here at all. Assume that $v_0$ is equal in both jumps and that there is no rotational movement; can't we just use the formula for conservation of mechanical energy?

$$\frac{1}{2}m v_{f}^2 + mgh_f = \frac{1}{2}m v_{0}^2 + mgh_0$$

And here we can cancel the mass, $m$, since it appears in all terms. So on earth we will have:

$$gh_f = \frac{1}{2} v_{0}^2$$

$$9.8 \cdot 1.4 = \frac{1}{2} v_{0}^2$$

$$v_0 = 5.2 \ m/s$$

Then on the moon, since we know $v_0$, we can find $h_f$:

$$1.62 h_f = \frac{1}{2} \cdot (5.2)^2$$

$$h_f = 8.3 \ m$$

Would this not be an acceptable way to solve this? If this is wrong, can anyone please explain why this is wrong conceptually?

-

Your solution is correct, if you assume that $v_0$ is the same on both moon and earth. The reason this seems counterintuitive is because, almost certainly, the jumper will not be able to generate the same $v_0$ on earth and moon. We can see this as follows. The initial velocity is proportional to the integral of the force applied by the jumper's feet to the surface over time:

\begin{align} \Delta v &= \frac{1}{m} \int_{t_i}^{t_f} dt \, F(t). \end{align}

Assuming the force profile in time, $F(t)$, is the same on the earth and moon and is applied over the same time interval, $t_f-t_i$, we find that the initial velocity $v_0 = \Delta v$ is less on the moon owing to the larger total mass, $m_{jumper} + m_{suit}$.

-

Thanks a lot. This question was actually given today on a mid-term exam. However, I am taking an introductory algebra-based physics class, so it may be quite possible that we were actually supposed to assume that $v_0$ is equal on both jumps. Is there any way to find the differences in $v_0$ without resorting to calculus? – user12277 Oct 18 '12 at 19:35

If the statement of the problem didn't specify either the assumption that $v_0$ is the same on earth and moon or more information (about the nature of the force) then it was improperly stated. I suspect that you've interpreted it correctly. Good work and thanks for giving a complete description of your solution. – MarkWayne Oct 18 '12 at 19:54

Thanks a lot! I am crossing my fingers I interpreted it correctly. The way I stated the problem is a direct copy of the way the problem was asked, and, as mentioned, this is a non-calculus based physics class. During the exam I thought perhaps our professor added the mass information just to throw us a bit off track. Time will tell if I'm right :) – user12277 Oct 18 '12 at 19:59

@user12277 Often in exams, just the information you need is given. In this case you probably should've assumed the amount of kinetic energy just at lift-off is the same. – Bernhard Oct 19 '12 at 5:42

I don't agree with Bernhard's suggestion that one should assume that the amount of kinetic energy is the same on earth and the moon. I'd be interested to know why he thinks this. Having equal amounts of kinetic energy at lift-off would require a lower speed, $v_0$ (inversely proportional to the ratio of the masses), on the moon. I don't see any physical reason or means by which one would maintain constant kinetic energy.
– MarkWayne Oct 19 '12 at 16:50

-

Since this is homework, we are supposed to only give hints. Your assumption that $v_0$ is the same is incorrect. It is only the force of the muscles that is the same. Good luck!

-

Thanks a lot! As mentioned above, this is an introductory physics class with no calculus. So I am actually unsure as to whether or not we were supposed to assume $v_0$ is equal or not. – user12277 Oct 18 '12 at 19:37

-

Suppose we know $g_e$ ($g$ on earth) and the man has to lift his $c_m$ (center of mass) the same distance $s$ on the moon as on the earth, until he loses contact with the ground. Let $h_e$ and $h_m$ be the heights he reaches on earth and moon respectively. Assuming the force is the same, on the moon and on earth, until he loses contact with the ground, the work $W$ that he produces during "take off" is also the same. This work is used to bring him to maximum height.

Then on Earth: $W = m g_e h_e$; on the Moon: $W = (m + m_s) g_m h_m$. From the above, $m g_e h_e = (m+m_s) g_m h_m$ and

$$h_m = \frac{m}{m+m_s} \cdot \frac{g_e}{g_m} \cdot h_e.$$

-

Thank you so much for your answer. I actually got a confirmation from my professor today that this was indeed how we were supposed to interpret the problem. So it turns out I was wrong after all. Too bad, but at least now I will be more careful with problems such as these in the future when it comes to making assumptions. – user12277 Oct 19 '12 at 11:42

Yep. Work-energy will solve it. Sorry if I caused any confusion. (Though the answers I gave are internally consistent.) – MarkWayne Oct 19 '12 at 18:46
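For the record, plugging the given numbers into the formula from the accepted answer (using $g_e = 9.8 \ m/s^2$, as in the question) gives

$$h_m = \frac{m}{m+m_s} \cdot \frac{g_e}{g_m} \cdot h_e = \frac{80}{80+124} \cdot \frac{9.8}{1.62} \cdot 1.4 \ m \approx 3.3 \ m.$$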
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9576038122177124, "perplexity": 266.91045835769035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160918.28/warc/CC-MAIN-20160205193920-00079-ip-10-236-182-209.ec2.internal.warc.gz"}
https://proofwiki.org/wiki/Definition:Antisymmetric_Relation/Definition_2
# Definition:Antisymmetric Relation/Definition 2

## Definition

Let $\mathcal R \subseteq S \times S$ be a relation in $S$.

$\mathcal R$ is antisymmetric if and only if:

$\tuple {x, y} \in \mathcal R \land x \ne y \implies \tuple {y, x} \notin \mathcal R$

## Also known as

Some sources render this concept as anti-symmetric relation.

Some sources (perhaps erroneously) use this definition for an asymmetric relation.

## Antisymmetric and Asymmetric Relations

Note the difference between:

An asymmetric relation, in which the fact that $\tuple {x, y} \in \mathcal R$ means that $\tuple {y, x}$ is definitely not in $\mathcal R$

and:

An antisymmetric relation, in which there may be instances of both $\tuple {x, y} \in \mathcal R$ and $\tuple {y, x} \in \mathcal R$, but if there are, then it means that $x$ and $y$ have to be the same object.

## Also see

• Results about symmetry of relations can be found here.
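As an illustration of the definition on a finite relation – this checker is not part of the ProofWiki page; it simply tests the defining condition pair by pair, in C++:

#include <iostream>
#include <set>
#include <utility>

// Definition 2, checked directly: for every (x, y) in R with x != y,
// (y, x) must not be in R.
bool is_antisymmetric(const std::set<std::pair<int,int> >& R)
{
 for (const auto& p : R)
  {
   if (p.first != p.second && R.count(std::make_pair(p.second, p.first)))
    {
     return false;
    }
  }
 return true;
}

int main()
{
 // "Divides", restricted to {1, 2, 3, 4}: an antisymmetric relation
 std::set<std::pair<int,int> > R =
  {{1,1},{1,2},{1,3},{1,4},{2,2},{2,4},{3,3},{4,4}};
 std::cout << std::boolalpha << is_antisymmetric(R) << "\n"; // true

 // Adding the reverse of a non-diagonal pair destroys antisymmetry
 R.insert({4,2});
 std::cout << is_antisymmetric(R) << "\n"; // false
 return 0;
}

Note that the diagonal pairs $\tuple {x, x}$ are allowed by the check, which is exactly how an antisymmetric relation differs from an asymmetric one.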
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060047268867493, "perplexity": 884.7680960166683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530452.95/warc/CC-MAIN-20191211074417-20191211102417-00082.warc.gz"}
https://www.physicsforums.com/threads/plate-capacitors-connected.677186/
# Plate capacitors connected

1. Mar 9, 2013

### Woopydalan

1. The problem statement, all variables and given/known data

If a 3.0-μF capacitor charged to 40 V and a 5.0-μF capacitor charged to 18 V are connected to each other, with the positive plate of each connected to the negative plate of the other, what is the final charge on the 3.0-μF capacitor?

a. 11 μC
b. 15 μC
c. 19 μC
d. 26 μC
e. 79 μC

2. Relevant equations

3. The attempt at a solution

The positive-to-negative connection between the plates means they are connected in series, right? So if there are 210 μC available, (40 × 3) + (18 × 5), shouldn't the charge be the same, since charge is the same for plates connected in series? Thus maybe the average or something, yet that is not one of the possibilities. The answer is a, but I'm not sure why.

2. Mar 9, 2013

### CWatters

That would be the total charge if they were connected +ve to +ve. Better to think of the capacitors as connected in parallel but with opposite charge.

3. Mar 9, 2013

### Woopydalan

That's the same thing as being connected in series. Ok, so then do I just subtract the charges (120 μC - 90 μC)? Then there is 30 μC of charge available, and since they are in series, they should have equal amounts, 15 μC? Yet that isn't the answer, which is 11 μC.

4. Mar 9, 2013

### lewando

Looking at the 2 capacitors from the parallel perspective, the voltage across them needs to be the same. Using Q(3μF) = C(3μF)·V and Q(5μF) = C(5μF)·V, the charge must be different.

5. Mar 9, 2013

### Woopydalan

Why would I look at them from a parallel perspective? If the negative and positive plates of each capacitor are connected, doesn't that demand that they be in series?

6. Mar 9, 2013

### lewando

Why would you not? I am not familiar with that rule. Fact is, you can look at the two connected caps from either perspective. If you are interested in how current might flow in a loop, look at them in series. If you are interested in the voltage across 2 points (the two junctions), consider them in parallel.

Last edited: Mar 9, 2013

7. Mar 9, 2013

### CWatters

In general (not this problem)...

Series capacitors have equal charge (because the same current flows through both).
Parallel capacitors have equal voltage (because they are connected between the same nodes).

In this problem the voltage must end up the same but the charge will be different, so it makes more sense to think of the capacitors as being in parallel.

Let's say Q1 and Q2 are the initial charges and Q1' and Q2' are the final charges. We know these must rearrange so the final voltage is the same, so

V = Q1'/C1 = Q2'/C2 ..............(1)

The total charge is conserved, so

Q1 - Q2 = Q1' + Q2' ...............(2)

It's -ve on the left because the charge on one capacitor is of the opposite polarity.

Two equations and two unknowns.

8. Mar 10, 2013

### CWatters

Oops I meant "not just this problem".
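For completeness, finishing the algebra from CWatters' equations (1) and (2) with the given numbers confirms answer (a):

$$Q_1 - Q_2 = 120 \ \mu C - 90 \ \mu C = 30 \ \mu C = Q_1' + Q_2' = (C_1 + C_2)V = (8.0 \ \mu F)V,$$

so $V = 3.75 \ V$, and the final charge on the 3.0-μF capacitor is $Q_1' = C_1 V = (3.0 \ \mu F)(3.75 \ V) \approx 11 \ \mu C$ (the remaining $18.75 \ \mu C \approx 19 \ \mu C$ sits on the 5.0-μF capacitor, which explains distractor c).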
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9244078397750854, "perplexity": 1671.964074776782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648207.96/warc/CC-MAIN-20180323102828-20180323122828-00220.warc.gz"}
https://www.freemathhelp.com/forum/threads/grade-8-fractions-math-help.130704/
# grade 8 fractions math help

##### New member

Joined Jul 11, 2021
Messages 7

Jacques takes 3/4 h to fill one shelf at the supermarket. Henri can fill the shelves in two-thirds of Jacques' time. There are 15 shelves. Henri and Jacques work together. How long will it take to fill the shelves? Justify your answer.

Thank you so much!

#### Dr.Peterson

##### Elite Member

Joined Nov 12, 2017
Messages 14,681

Jacques takes 3/4 h to fill one shelf at the supermarket. Henri can fill the shelves in two-thirds of Jacques' time. There are 15 shelves. Henri and Jacques work together. How long will it take to fill the shelves? Justify your answer. Thank you so much!

I don't see this as requiring factoring (primarily). Please show us what you have tried, or at least what topics you have recently learned, so we can see where you need help and what sort of help will be useful to you. Did you not read this?

##### New member

Joined Jul 11, 2021
Messages 7

I don't see this as requiring factoring (primarily). Please show us what you have tried, or at least what topics you have recently learned, so we can see where you need help and what sort of help will be useful to you. Did you not read this?

Hello! Sorry I didn't see that part in the guidelines. I think I was also trying to say fractions instead of factoring. This is how I tried to solve it:

Since Jacques takes 3/4 hours to fill one shelf, for 15 shelves it would take (3/4 hours) × (15 shelves) = 11.25 hours.

Then, Henri takes 2/3 of Jacques' time, so it would take (11.25 hours) × (2/3) = 7.5 hours.

I then did 11.25 - 7.5 = 3.75 hours.

Recently, I've been learning about basic calculations with fractions like adding and subtracting, multiplying and dividing. I have a bit of difficulty with word problems though, so I would appreciate the help. Thank you.

#### Dr.Peterson

##### Elite Member

Joined Nov 12, 2017
Messages 14,681

Hello! Sorry I didn't see that part in the guidelines. I think I was also trying to say fractions instead of factoring. This is how I tried to solve it: Since Jacques takes 3/4 hours to fill one shelf, for 15 shelves it would take (3/4 hours) × (15 shelves) = 11.25 hours. Then, Henri takes 2/3 of Jacques' time, so it would take (11.25 hours) × (2/3) = 7.5 hours. I then did 11.25 - 7.5 = 3.75 hours. Recently, I've been learning about basic calculations with fractions like adding and subtracting, multiplying and dividing. I have a bit of difficulty with word problems though, so I would appreciate the help. Thank you.

Thanks. You did that part well, up to the subtraction at the end.

My next question is, have you seen any "time to do a job" problems before, perhaps without fractions? This is commonly taught as part of algebra, but doesn't require it.

The basic idea is that this is really a rate problem. You need to find the rate (in shelves per hour, or perhaps supermarkets per hour) for each person, and then add them, because, for example, if I can do something at a rate of 2 tasks per hour, and you can do it at 3 tasks per hour, then together (if we don't interact to help or hinder one another) we will do 5 tasks per hour.

Does any of that sound familiar? Try doing whatever you can with this idea.

#### Otis

##### Elite Member

Joined Apr 22, 2015
Messages 4,320

Hello PHMTY. We use reciprocals when dealing with 'combined work rates' type problems. See this page, for some worked examples.

For example, if it takes me 15 minutes to complete a task, then I complete 1/15th of the job per minute. (1/15 is the reciprocal of 15.)
If it takes you 10 minutes to complete the same task, then you complete 1/10th of the job per minute. (1/10 is the reciprocal of 10.)

We add the reciprocals to find the fractional amount of the task completed per time unit when we work together.

1/15 + 1/10 = 1/6

In other words, working together, we complete 1/6th of the task per minute.

We consider the number 1 to represent 100% of the task. Therefore, 1/6th of the job done per minute means that it takes us 6 minutes, working together. (6 times 1/6 equals 1.)

Go through some worked examples, at the link above. See if that helps. Post your attempt, if you'd like more help.

##### New member

Joined Jul 11, 2021
Messages 7

Thanks. You did that part well, up to the subtraction at the end. My next question is, have you seen any "time to do a job" problems before, perhaps without fractions? This is commonly taught as part of algebra, but doesn't require it. The basic idea is that this is really a rate problem. You need to find the rate (in shelves per hour, or perhaps supermarkets per hour) for each person, and then add them, because, for example, if I can do something at a rate of 2 tasks per hour, and you can do it at 3 tasks per hour, then together (if we don't interact to help or hinder one another) we will do 5 tasks per hour. Does any of that sound familiar? Try doing whatever you can with this idea.

Ok, that makes a lot more sense. I tried starting all over like this:

For Jacques, it would be 1 shelf per 3/4 hours, so (1)/(3/4) = 1.33 shelves/hour.

For Henri, 1 shelf per 1/2 hours, so (1)/(1/2) = 2 shelves/hour.

So 1.33 + 2 = 3.33 shelves/hour altogether.

Finally, for 15 shelves it would be 15/3.33 = 4.5 hours?

##### New member

Joined Jul 11, 2021
Messages 7

Hello PHMTY. We use reciprocals when dealing with 'combined work rates' type problems. See this page, for some worked examples. For example, if it takes me 15 minutes to complete a task, then I complete 1/15th of the job per minute. (1/15 is the reciprocal of 15.) If it takes you 10 minutes to complete the same task, then you complete 1/10th of the job per minute. (1/10 is the reciprocal of 10.) We add the reciprocals to find the fractional amount of the task completed per time unit when we work together. 1/15 + 1/10 = 1/6 In other words, working together, we complete 1/6th of the task per minute. That means it takes us 6 minutes, working together. (6 is the reciprocal of 1/6.) Go through some worked examples, at the link above. See if that helps. Post your attempt, if you'd like more help.

Thank you! I'll take a look at the examples later as well.

#### Otis

##### Elite Member

Joined Apr 22, 2015
Messages 4,320

Oh, I forgot to mention the following. Don't convert your fractions to decimal form as you work. Use only the fractional forms. That way, after adding up the fractional amounts of the job done by each person, you may simply take the reciprocal of the total.

##### New member

Joined Jul 11, 2021
Messages 7

Oh, I forgot to mention the following. Don't convert your fractions to decimal form as you work. Use only the fractional forms. That way, after adding up the fractional amounts of the job done by each person, you may simply take the reciprocal of the total.

ohh ok thanks I'll keep that in mind

#### Otis

##### Elite Member

Joined Apr 22, 2015
Messages 4,320

(1)/(3/4) = 1.33

15/3.33 = 4.5 hours?

4.5 hours is a correct answer. Your method is okay.
But, if you're expected to use exact arithmetic (instead of calculator approximations), there might be an issue with your work on a class assignment. 1/(3/4) is not 1.33 (it's actually 4/3). 1.33 is only a decimal approximation for 4/3. 1.33 is exactly 133/100. Likewise, 15/3.33 is not 4.5 15/3.33 is actually 500/111. ##### New member Joined Jul 11, 2021 Messages 7 4.5 hours is the correct answer. You method is okay. But, if you're expected to use exact arithmetic (instead of calculator approximations), there might be an issue with your work on a class assignment. 1/(3/4) is not 1.33 (it's actually 4/3). 1.33 is a decimal approximation for 4/3. 15/3.33 is not 4.5 (it's actually 45/4). ah so for the calculations, instead of changing them to decimal form, I would keep the fractions and go like this 4/3 + 2 = 10/3 shelves per hour then for 15 shelves: (15) / (10/3) = 45/10 hours #### Otis ##### Elite Member Joined Apr 22, 2015 Messages 4,320 Very good. Here's another way to go. Mr. J fills a shelf in 3/4 hour. 15 × 3/4 = 45/4 Mr. J completes the job in 45/4 hour. Mr. H requires 2/3 the time Mr. J does, to complete the job. 2/3 × 45/4 = 15/2 Therefore, Mr. J completes 4/45ths of the job per hour, and Mr. H completes 2/15ths of the job per hour. We combine those reciprocals. 4/45 + 2/15 = 2/9 Working together, they complete 2/9ths of the job per hour, so it takes them 9/2 hour to finish. 9/2 = 4.5 ##### New member Joined Jul 11, 2021 Messages 7 Very good. Here's another way to go. Mr. J fills a shelf in 3/4 hour. 15 × 3/4 = 45/4 Mr. J completes the job in 45/4 hour. Mr. H requires 2/3 the time Mr. J does, to complete the job. 2/3 × 45/4 = 15/2 Therefore, Mr. J completes 4/45ths of the job per hour, and Mr. H completes 2/15ths of the job per hour. We combine those reciprocals. 4/45 + 2/15 = 2/9 Working together, they complete 2/9ths of the job per hour, so it takes them 9/2 hour to finish. 9/2 = 4.5 haha ok got it! thank you so much it took me so long to figure this out #### jonah2.0 ##### Full Member Joined Apr 29, 2014 Messages 517 Beer induced filibuster follows. haha ok got it! thank you so much it took me so long to figure this out Hard earned gains are not easily forgotten. Basic principle of no pain no gain mantra. One has a tendency to remember and value them permanently. Last edited: #### HallsofIvy ##### Elite Member Joined Jan 27, 2012 Messages 7,765 Jacques takes 3/4h to fill one shelf at the supermarket. Henri can fill the shelves in two-thirds Jacques’ time. There are 15 shelves. Henri and Jacques work together. How long will it take to fill the shelves? Justify your answer. Thank you so much! 2/3 of 3/4 is, of course, 2/4= 1/2 hour. Jacques, filling one shelf in 3/4 hour is working at a rate of 4/3 shelves per hour. Henri, filling one shelf in 1/2 hour, is workingt at a rate of 2 shelves per hour. When people work together, their rates add. So Jacques and Henri, working together fill 4/3+ 2= 4/3+ 6/3= 10/3 shelves per hour. Together they can fill 15 shelves in 15/(10/3)= 15(3/10)= 9/2 or 4 and a half hours. #### eddy2017 ##### Elite Member Joined Oct 27, 2017 Messages 2,525 this problem drew my attention. i tried to solve it using another way and i'm bringin' it before you to approve of it or destroy it. I did exactly what the poster did until i found how much hours each of them work. Jason 3/4 * 15 =45/4=11.25 hrs Henry 2/3 * 11.25 = 7.5 hrs here i veered direction and went this way, (though might be the wrong way!) 
i set up this formula to find out combined rate of work t/A + t/B =1 where t = the amount of time working together to accomplish the task i let Jason be A I let Henry be B So, again, here's the formula t/A + t/B =1 i'll plug in what i have t/11.25 + t/7.5 =1 i will round up and down to make finding a common denominator easier t/11 + t/8 =1 common denominator of 8 and 11 =88 ___ + _____ 88 88 now i will have to multiply both fractions for a number that yields 88. t/11 (8/8)= 8t/88 t/8 (11/11) =11t/88 so now i can add this fractions 8t/88 + 11t/88 = 19t/88 19t/88=1 solvin' for t 88( 19t/88) = 1 * 88 19t=88 19t/19 =88/19 =4.6 which is pretty close to the result he got. if Jason and Henri work together they will fill the shelves in approximately 4.6 hours. #### Dr.Peterson ##### Elite Member Joined Nov 12, 2017 Messages 14,681 i'll plug in what i have t/11.25 + t/7.5 =1 i will round up and down to make finding a common denominator easier t/11 + t/8 =1 No. Rounding means you'll be solving a different problem. It's pure luck that your answer is fairly close to the right answer. There are several ways to make it easier without rounding. One is to just multiply both sides of the equation by 11.25*7.5, which is the actual LCM. You get 7.5t + 11.25t = 84.375. Add and divide, and you get t = 84.375/18.75 = 4.5 hours. Another way is not to use decimals at all; the two times are 11 1/4 = 45/4 and 7 1/2 = 15/2, so the equation is (4/45)t + (2/15)t = 1. The LCD is 45, so we multiply by that and get 4t + 6t = 45, so that 10t = 45 and t = 45/10 = 4.5 again. #### eddy2017 ##### Elite Member Joined Oct 27, 2017 Messages 2,525 awfully good!.. thank you for rectifying, Doc. t/11.25 + t/7.5 =1 multiply both sied by 11.25*7.5 11.25*7.5( t/11.25)+t/7.5 = 11.25*7.5(1) 7.5t+ 11.25 t= 84.375 18.5t=84.375 18.5t/18.5 =84.375/18.5 t=4.56081081 t=4.5 wow! Last edited: #### Dr.Peterson ##### Elite Member Joined Nov 12, 2017 Messages 14,681 7.5t+ 11.25 t= 84.375 18.5t=84.375 Please check that. What is 7.5 + 11.25? t=4.56081081 t=4.5 I showed you the correct work, completely. It gives the exact answer, not one you have to round (incorrectly, even!) at the end. Joined Oct 27, 2017 Messages 2,525
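To tie the thread together, here is a minimal Python sketch (an editorial addition, not part of the original discussion) that redoes the combined-rate computation with exact fractions, in the spirit of Otis's advice to avoid decimal form:

```python
from fractions import Fraction

jacques_time = Fraction(3, 4) * 15            # 45/4 hours for all 15 shelves
henri_time   = Fraction(2, 3) * jacques_time  # 15/2 hours for all 15 shelves

# Work rates in "jobs per hour" are the reciprocals of the times, and rates add.
combined_rate = 1 / jacques_time + 1 / henri_time   # 4/45 + 2/15 = 2/9

print(1 / combined_rate)   # 9/2, i.e. 4.5 hours, matching the thread's answer
```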
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8091956377029419, "perplexity": 1940.6231605215105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500384.17/warc/CC-MAIN-20230207035749-20230207065749-00175.warc.gz"}
http://mathhelpforum.com/trigonometry/179049-find-terms-b.html
# Math Help - Find a in terms of b

1. ## Find a in terms of b

For $\pi < a < \frac{3\pi}{2}$ with $\cos(a) = -\sin(b)$ where $0 < b < \frac{\pi}{2}$, find $a$ in terms of $b$.

I am terrible at maths so please show any simple easy to understand working out so that I can understand better. Any help would be appreciated!

2. Originally Posted by Joker37
For $\pi < a < \frac{3\pi}{2}$ with $\cos(a) = -\sin(b)$ where $0 < b < \frac{\pi}{2}$, find $a$ in terms of $b$. I am terrible at maths so please show any simple easy to understand working out so that I can understand better. Any help would be appreciated!

Recall the following "transition formula" for going from cosine to sine, where $x$ is arbitrary:

$\cos(x) = \sin(x+\frac{\pi}{2}).$

Since the numbers $\frac{\pi}{2}$ and $-\frac{3\pi}{2}$ are different by $2\pi$, and since sine is periodic with period $2\pi$, we have also that

$\sin(x+\frac{\pi}{2})=\sin(x-\frac{3\pi}{2}),$

whence

$\cos(x) = \sin(x-\frac{3\pi}{2}).$

Since $\pi < a < \frac{3\pi}{2}$, we have that $a-\frac{3\pi}{2}$ lies in the interval $(-\frac{\pi}{2},0).$

Since you want to solve $\cos(a) = -\sin(b)$ for $a$, we have by the transition formula above that we might just as well solve

$\sin(a-\frac{3\pi}{2}) = -\sin(b)=\sin(-b)$

for $a$, where the last equality follows from sine being an odd function.

Since $0 < b < \frac{\pi}{2}$, we have that $-b$ lies in the interval $(-\frac{\pi}{2},0)$. Hence both $a-\frac{3\pi}{2}$ and $-b$ are numbers in the interval $(-\frac{\pi}{2},0)$, on which sine is injective, so in solving $\sin(a-\frac{3\pi}{2}) =\sin(-b)$ for $a$, we can apply $\sin^{-1}$ to both sides to obtain

$a-\frac{3\pi}{2} = -b,$ or $a = -b+\frac{3\pi}{2}.$

3. I don't understand all of the above. Below was my attempt:
$\cos(a)=-\sin(b)$
$\cos(\pi+a)=-\cos(\frac{\pi}{2}-b)$
$\pi+a=\frac{\pi}{2}-b$
$a=\frac{-\pi}{2}-b$
$\frac{3\pi}{2}-b$
Where did I go wrong?

4. Originally Posted by Joker37
I don't understand all of the above. Below was my attempt:
$\cos(a)=-\sin(b)$
$\cos(\pi+a)=-\cos(\frac{\pi}{2}-b)$

How is $\cos(a) = \cos( \pi + a)$?

-Dan

5. Originally Posted by topsquark
How is $\cos(a) = \cos( \pi + a)$?
-Dan

$\cos(a)=-\sin(b)$
$-\cos(\pi+a)=-\cos(\frac{\pi}{2}-b)$ ?
But still, how would this change the result?
edit: $-\pi -a = \frac{\pi}{2}-b$ ?
edit2: $a=\frac{3\pi}{2}-b$
Is this the right methodology?

6. Originally Posted by Joker37
$\cos(a)=-\sin(b)$
$-\cos(\pi+a)=-\cos(\frac{\pi}{2}-b)$ ?
But still, how would this change the result?
edit: $-\pi -a = \frac{\pi}{2}-b$ ?
edit2: $a=\frac{3\pi}{2}-b$
Is this the right methodology?

Looks good to me. But as HappyJoe's post indicates, there are a few landmines we need to step over to say that we can do this.

-Dan

7. Originally Posted by topsquark
Looks good to me. But as HappyJoe's post indicates, there are a few landmines we need to step over to say that we can do this.
-Dan

What sort of things? I don't really understand HappyJoe's working out. I'm terrible at maths.

8. Originally Posted by Joker37
What sort of things? I don't really understand HappyJoe's working out. I'm terrible at maths.

I think it's best for you to go through HappyJoe's post and let us know what line or lines you don't understand. We can help you better that way.

-Dan
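The identity $\cos(\frac{3\pi}{2} - b) = -\sin(b)$ can also be spot-checked numerically; here is a short Python sketch (an editorial addition) confirming HappyJoe's result on a few sample values:

```python
import math

# Check that a = 3*pi/2 - b satisfies cos(a) = -sin(b)
# and lies in (pi, 3*pi/2) whenever b is in (0, pi/2).
for b in (0.1, 0.7, 1.3):
    a = 3 * math.pi / 2 - b
    assert math.pi < a < 3 * math.pi / 2
    assert math.isclose(math.cos(a), -math.sin(b), abs_tol=1e-12)
print("a = 3*pi/2 - b checks out numerically")
```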
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 53, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8981014490127563, "perplexity": 964.2658141695393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375091751.85/warc/CC-MAIN-20150627031811-00109-ip-10-179-60-89.ec2.internal.warc.gz"}
https://brilliant.org/discussions/thread/pre-rmo-201418/
Pre-RMO 2014/18

Let $f$ be a one-to-one function from the set of natural numbers to itself such that $f(mn) = f(m)f(n)$ for all natural numbers $m$ and $n$. What is the least possible value of $f(999)$?

This note is part of the set Pre-RMO 2014

Note by Pranshu Gaba
6 years, 8 months ago

Comments (sorted by Top Newest):

We can write $f(999) = (f(3))^3 \cdot f(37)$. To minimize this expression we can define the function $f$ as follows (if $x$ is composite, it must be prime factorized, as $f(999)$ is factorized above):

$f(x) = \begin{cases} 1 & x = 1 \\ 37 & x = 2 \\ 2 & x = 3 \\ 3 & x = 37 \\ p & x = p,\ p \text{ prime},\ p \neq 2, 3, 37\end{cases}$

This gives $f(999) = 8 \times 3 = \boxed{24}$.

- 6 years, 8 months ago

Good. But you need to justify that this function is one-one. This is not required in the exam, but still, as a matter of learning, one should leave no loophole.

- 6 years, 7 months ago

It is clearly justified, isn't it? For 1, 2, 3, 37 it is visible that it is one-one. For all numbers other than 2, 3, 37 it is one-one because if the number is prime then it maps the prime number to itself, hence one-one; and if it is composite then it can be written as a product of its primes, which will yield a product of one-one functions, which is one-one itself (as the function gives unique primes).

- 3 years ago

thanks Geeta

- 2 years, 10 months ago

$f(x) = x^{n}$ ($n$ an odd number). Least possible value of $f(999) \to 0$ when $n \to -\infty$.

- 6 years, 8 months ago

I don't think $f(x) = x^n$ is a one-to-one function, as not all natural numbers are included in the range of $f(x)$. Also, if $n$ is negative, $f(x)$ will be a fraction, but we want it to be a natural number.

- 6 years, 8 months ago

$f(x)=x^n$ is indeed a one-one function if $n$ is odd, but it does not map into the set of natural numbers if $n$ is negative, so this function does not satisfy the given conditions.

- 6 years, 7 months ago

Is the answer 24?

- 6 years, 8 months ago
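For readers who want to see the minimization concretely, here is a small brute-force Python check (an editorial addition) of the posted construction, under the assumption that $f$ permutes the primes $\{2, 3, 37\}$ and fixes every other prime:

```python
from itertools import permutations

# f(999) = f(3)^3 * f(37); try every permutation of {2, 3, 37}.
primes = (2, 3, 37)
best = None
for image in permutations(primes):
    f = dict(zip(primes, image))
    value = f[3] ** 3 * f[37]
    if best is None or value < best[0]:
        best = (value, f)
print(best)   # (24, {2: 37, 3: 2, 37: 3}), matching the answer above
```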
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 28, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704092144966125, "perplexity": 1659.6352298742543}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00565.warc.gz"}
https://homework.zookal.com/questions-and-answers/evaluate-the-following-limit-either--using-lhopitals-rule-please-613528213
# Question: evaluate the following limit using L'Hopital's Rule

###### Question details

Evaluate the following limit using L'Hopital's Rule. Please explain which indeterminate form the different steps are in, and why.

[The limit expression and the indeterminate forms it referred to were images on the original page and are not recoverable.]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986385107040405, "perplexity": 4091.3687644197576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487608702.10/warc/CC-MAIN-20210613100830-20210613130830-00360.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-5-cumulative-review-page-411/20
## Algebra: A Combined Approach (4th Edition)

$-4x^{2}+24x-4$

Step 1: $4(-x^{2}+6x-1)$

Step 2: Applying the distributive property, we get $4(-x^{2})+4(6x)+4(-1)$

Step 3: Multiplying, the expression becomes $-4x^{2}+24x-4$
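For anyone who wants a mechanical double-check of such expansions, a short SymPy snippet (an editorial addition; requires the sympy package) confirms the result:

```python
from sympy import symbols, expand

x = symbols('x')
print(expand(4 * (-x**2 + 6*x - 1)))   # prints: -4*x**2 + 24*x - 4
```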
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9416301250457764, "perplexity": 972.257656905827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946564.73/warc/CC-MAIN-20180424041828-20180424061828-00506.warc.gz"}
http://mathoverflow.net/questions/41984/smooth-bijection-between-non-diffeomorphic-smooth-manifolds?sort=oldest
# Smooth bijection between non-diffeomorphic smooth manifolds?

The "textbook" example of a smooth bijection between smooth manifolds that is not a diffeomorphism is the map $\mathbb{R} \rightarrow \mathbb{R}$ sending $x \mapsto x^3$. However, in this example, the source and target manifolds are diffeomorphic -- just not by the given map. Is there an example of a smooth bijection $X \rightarrow Y$ of smooth manifolds where $X,Y$ are not diffeomorphic at all? (and if so, what?)

(For instance, is it possible to arrange a smooth bijection from a sphere to an exotic sphere, failing to be a diffeomorphism because of the existence of critical points? or do homeomorphisms between different smooth structures on spheres fail to be everywhere smooth in some catastrophic way?)

- I guess the word "manifold" forbids me from just rolling the half-open interval around the circle, and perhaps also from sticking continuum-many discrete points on the line.... – Theo Johnson-Freyd Oct 13 '10 at 8:00
- Alternately, maybe you don't want to rest that much on the word "manifold", and instead mean to ask for smooth homeomorphisms that are not diffeomorphisms? – Theo Johnson-Freyd Oct 13 '10 at 8:01
- Hi Theo, yes, "manifold" with (what I think of as) the default meaning -- second countable, without boundary, etc.... – D. Savitt Oct 13 '10 at 8:07
- (and of course a smooth homeomorphism would be even better, but I'd be happy with just a smooth bijection) – D. Savitt Oct 13 '10 at 8:09
- In the compact case, every continuous bijection is a homeomorphism. – Greg Kuperberg Oct 13 '10 at 8:36

Every smooth manifold has a smooth triangulation, which yields a pseudofunctor from the category of smooth manifolds to the category of PL manifolds. (There is no actual functor; that would be crazy.) If two smooth manifolds are PL isomorphic, then the answer is yes. You can start with the PL isomorphism, and then build a homeomorphism that follows it and that has the property that all derivatives vanish in all directions perpendicular to every simplex. You can build the homeomorphism by induction from the $k$-skeleton to the $(k+1)$-skeleton using bump functions.

The PL Poincaré conjecture is true in dimensions other than 4, so all exotic spheres in the same dimension $n \ge 5$ are PL homeomorphic. (High-dimensional examples of exotic spheres start in dimension 7, as Milnor calculated.) In dimension 4, by contrast, every PL manifold has a unique smooth structure, and it is not known whether there are any exotic spheres.

On the other hand, if the manifolds are homeomorphic but not even PL homeomorphic, then I don't know. It is known that every manifold of dimension $n \ge 5$ has a unique Lipschitz structure, but I do not know a Lipschitz version of the above argument. On the positive side, passing from smooth to Lipschitz is an actual functor, so the answer to a modified question, is there a Lipschitz-smooth homeomorphism, is yes, and you can even make it bi-Lipschitz.
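As a side note on the "textbook" example (an editorial addition, not part of the original answer), the failure of $x \mapsto x^3$ to be a diffeomorphism is visible in one line, because the would-be inverse is not differentiable at the origin:

```latex
f(x) = x^3, \qquad f^{-1}(y) = y^{1/3}, \qquad
\left(f^{-1}\right)'(y) = \tfrac{1}{3}\, y^{-2/3} \to \infty
\quad \text{as } y \to 0.
```

Equivalently, $f'(0) = 0$, so the inverse function theorem fails exactly at the critical point.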
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9220088124275208, "perplexity": 343.2530461185866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398538144.64/warc/CC-MAIN-20151124205538-00236-ip-10-71-132-137.ec2.internal.warc.gz"}
https://crypto.stackexchange.com/questions/76386/understanding-simplification-steps-when-solving-complicated-equations-in-galois
# Understanding simplification steps when solving complicated equations in Galois Field

I just encountered a problem when I tried to understand a base-point conversion from x25519 to ed25519. I can't really wrap my head around how the value of $x$ can be the stated value below. Can someone please go through the derivation steps for $x$, especially the modular arithmetic part?

$$x^2=\frac{1-(4/5)^2}{-1+(121665/121666)\cdot(4/5)^2}.$$

Now, by considering the symmetric representatives (that is, from the set $\{-(q-1)/2,\dots,(q-1)/2\}$) for the elements of $\mathbb F_q$, we can interpret one of the roots of this equation as being positive and the other as negative, corresponding to the sign of their symmetric representative. The unique "positive" solution to the equation above is precisely $15112221349535400772501151409588531511454012693041857206046113283949847762202$.

All these operations are in a finite field, in particular a prime field with modulus $p = 2^{255}-19$. So, for example, $4/5$ is not $0.80$, but $4$ times the inverse of $5$ modulo $p$ (i.e. the number that, when multiplied with $5$, gives the remainder $1$ when divided by $p$). In particular, $1/5$ is $11579208923731619542357098500868790785326998466564056403945758400791312963990$ (because that times 5 is equal to 1 when reduced modulo $p$). The same applies to multiplication, addition and so on - everything is modulo $p$.

sage: F = GF(2^255-19)
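Below is a minimal Python sketch (an editorial addition; the Sage session above is truncated) that reproduces the whole computation: divisions become modular inverses via Fermat's little theorem, and the square root uses the standard shortcut for primes $p \equiv 5 \pmod 8$:

```python
p = 2**255 - 19

def inv(a):
    # Modular inverse via Fermat's little theorem: a^(p-2) = a^(-1) (mod p).
    return pow(a, p - 2, p)

def sqrt_mod(a):
    # Square root modulo p for p = 5 (mod 8) (Atkin's variant).
    r = pow(a, (p + 3) // 8, p)
    if r * r % p == a % p:
        return r
    return r * pow(2, (p - 1) // 4, p) % p   # multiply by a square root of -1

u  = 4 * inv(5) % p                          # 4/5 in GF(p)
x2 = (1 - u * u) * inv(-1 + 121665 * inv(121666) * u * u) % p
x  = sqrt_mod(x2)
if x > (p - 1) // 2:    # choose the root whose symmetric representative is positive
    x = p - x
print(x)
# 15112221349535400772501151409588531511454012693041857206046113283949847762202
```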
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9794041514396667, "perplexity": 216.7910836644696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371700247.99/warc/CC-MAIN-20200407085717-20200407120217-00307.warc.gz"}
https://brilliant.org/problems/11-is-really-lovely-well-bring-it-everywhere/
# 11 is really lovely, we'll bring it everywhere!

Calculus Level 3

Let the sum of the following infinite sequence be $$N$$.

$1,\dfrac{1}{2},\dfrac{1}{2},\dfrac{1}{6},\dfrac{1}{4},\dfrac{1}{12},\dfrac{1}{8},\dfrac{1}{20},\dfrac{1}{16},\dfrac{1}{30},\dfrac{1}{32},\dfrac{1}{42}, \ldots$

Find the $$\textbf{remainder}$$ when $$\displaystyle \Bigl(N^{2014}+1\Bigr)$$ is divided by $$\color{Purple}{\textbf{11}}$$.

Hint: Is it really $$\textbf{a}$$ sequence?
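An editorial aside on the hint: reading the list as two interleaved series (powers of $\tfrac12$ in the odd positions and $\tfrac{1}{n(n+1)}$ in the even positions, an interpretation consistent with the terms shown), a short Python experiment suggests the value of $N$ and the requested remainder:

```python
from fractions import Fraction

total = Fraction(0)
for k in range(200):
    total += Fraction(1, 2**k)               # 1 + 1/2 + 1/4 + ...    -> 2
    total += Fraction(1, (k + 1) * (k + 2))  # 1/2 + 1/6 + 1/12 + ... -> 1 (telescopes)
print(float(total))                 # approaches 3, suggesting N = 3
print((pow(3, 2014, 11) + 1) % 11)  # remainder of N^2014 + 1 on division by 11
```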
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.986082911491394, "perplexity": 1252.8456137608352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00298-ip-10-171-10-70.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/151315/frictional-force-require-to-balance-leaning-sticks
Frictional force required to balance leaning sticks

Here is the question (from Morin's Introduction to Classical Mechanics):

One stick leans on another as shown in Fig. 2.21. A right angle is formed where they meet, and the right stick makes an angle $\theta$ with the horizontal. The left stick extends infinitesimally beyond the end of the right stick. The coefficient of friction between the two sticks is $\mu$. The sticks have the same mass density per unit length and are both hinged at the ground. What is the minimum angle $\theta$ for which the sticks don't fall?

My first approach to solving this problem was to treat the intersection of the sticks as the pivot point. Here, the force of friction would not contribute, since it acts at the pivot point. The only forces that apply would be the forces of gravity acting downward on the two sticks, which balance the torque out. However, the actual answer involves the coefficient of friction:

$$(\tan{\theta})^{2} \ge \frac{1}{\mu}.$$

Obviously, my reasoning for this problem will not yield this answer, so I ask: where has my reasoning gone wrong?

- You had $1/u$ but stated the actual answer involves the coefficient of friction that you had previously denoted as $\mu$. Please confirm/deny my correction to your equation. – Kyle Kanos Dec 9 '14 at 3:06
- Yes, you are correct. Sorry about that. – user3904840 Dec 9 '14 at 3:08
- Alright, I will attach that soon. – user3904840 Dec 9 '14 at 3:10
- @user3904840: No worries, not all users are proficient in using LaTeX (the equation editor). – Kyle Kanos Dec 9 '14 at 3:10
- The figure is attached. – user3904840 Dec 9 '14 at 3:13
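Since $\tan\theta > 0$ for $0 < \theta < \pi/2$, the quoted answer is equivalent to $\theta \ge \arctan(1/\sqrt{\mu})$. A tiny Python sketch (an editorial addition, with purely illustrative values of $\mu$) evaluates that minimum angle:

```python
import math

for mu in (0.25, 0.5, 1.0):   # sample friction coefficients, illustrative only
    theta_min = math.atan(1 / math.sqrt(mu))
    print(f"mu = {mu}: theta_min = {math.degrees(theta_min):.1f} degrees")
```

For $\mu = 1$ this gives $\theta_{\min} = 45°$, consistent with $\tan^2\theta \ge 1$.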
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9002262949943542, "perplexity": 520.7892664372673}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738960.69/warc/CC-MAIN-20200813043927-20200813073927-00576.warc.gz"}
https://mitchwaite.com/czjvz/bced9f-dark-matter-density-from-cmb
# dark matter density from cmb

Dark matter plus normal matter add up to 31.5% of the total density; dark energy contributes the remaining 68.5%. The CMB shows matter accounts for roughly 30% of the critical density and that the total is 1. Measurements of cosmic microwave background (CMB) anisotropies provide strong evidence for the existence of dark matter and dark energy; these are the most sensitive and accurate measurements of fluctuations in the CMB radiation to date, and they have made the inflationary Big Bang theory the standard cosmological model.

The cosmic microwave background is thought to be leftover radiation from the Big Bang, the time when the universe began. Before the creation of the CMB, the universe was a hot, dense and opaque plasma containing both matter and energy; photons could not travel freely, so no light escaped from those earlier times. The photon-baryon fluid stops oscillating at decoupling, when the baryons release the photons; after this, photons no longer scatter with matter but propagate freely. This cosmic microwave background can be observed today in the (1–400) GHz range: the photons' energy (and hence their temperature) is redshifted to $T_0 = 2.728$ K today, corresponding to a density of about 400 photons per cm³. The CMB has a perfect blackbody spectrum. Another parameter, often overlooked, is the mean CMB temperature (a.k.a. the CMB monopole), denoted $T_0$; its value, as measured by FIRAS, of 2.7255 ± 0.0006 K has an extraordinarily small uncertainty of 0.02%. The CMB is detectable as a faint background of microwaves, which we measure with specialized telescopes in remote locations like the high Andes and the South Pole. The CMB radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang theory; the discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory.

Why introduce the mysterious dark energy into the game?

- The CMB indicates the total energy density is close to critical (flat universe).
- Many observations indicate that the dark matter energy density is sub-critical.
- Dark energy is required to make these statements consistent.
- The amount of dark energy is consistent with that needed to explain distant supernovae.

Fig. 2: Angular power spectrum of CMB temperature fluctuations. The spherical-harmonic multipole number, $\ell$, is conjugate to the separation angle. The data points thus far favor the theoretical expectations for inflation+cold dark matter (upper curve) over those for topological defect theories (lower curve, provided by Uros Seljak). The thumbnail on the right is my simplified way of showing how these data, combined with the CMB measurement of the acoustic scale length at z = 1089 and the supernova measurement of the acceleration of the expansion of the Universe, provide enough information to simultaneously determine the current matter density, the current dark energy density and the rate of change of the dark energy density.

A proposed high-resolution survey, CMB-HD, has three headline goals:

1. Measure the small-scale matter power spectrum from weak gravitational lensing using the CMB as a backlight. With this, CMB-HD aims to distinguish between a matter power spectrum predicted by models that can explain observational puzzles of small-scale structure, and that predicted by vanilla cold dark matter (CDM), with a significance of at least 8σ. This would be a clean measurement of the matter power spectrum on these scales, free of the use of baryonic tracers, and it would greatly limit the allowed models of dark matter and baryonic physics, shedding light on dark-matter particle properties and galaxy evolution.

2. Measure the number of light particle species that were in thermal equilibrium with the known standard-model particles at any time in the early Universe, i.e. Neff, with a 1σ uncertainty of σ(Neff) = 0.014. This would cross the critical threshold of 0.027, which is the amount that any new particle species must change Neff away from its Standard Model value of 3.04; such a measurement would rule out or find evidence for new light thermal particles with at least 95% (2σ) confidence level. This is particularly important because many dark matter models predict new light thermal particles, and recent short-baseline neutrino experiments have found puzzling results possibly suggesting new neutrino species.

3. Constrain or discover axion-like particles by observing the resonant conversion of CMB photons into axions in the magnetic fields of galaxy clusters. CMB-HD would explore the mass range of $10^{-14}\,\mathrm{eV} < m_a < 2 \times 10^{-12}\,\mathrm{eV}$ and improve the constraint on the axion-photon coupling by over 2 orders of magnitude beyond current particle physics constraints, to $g_{a\gamma} < 0.1 \times 10^{-12}\,\mathrm{GeV}^{-1}$, independently of whether axions constitute the dark matter. Nearly massless pseudoscalar bosons, often generically called axions, appear in many extensions of the standard model; these ranges are unexplored to date and complementary with other cosmological searches for the imprints of axion-like particles on the cosmic density field.

(See Sehgal, N. et al., "CMB-HD: An Ultra-Deep, High-Resolution Millimeter-Wave Survey Over Half the Sky", September 2019, https://arxiv.org/pdf/1906.10134.pdf; "Using Astronomical Telescopes to Study Unseen Matter"; and the CMB-HD Astro2020 RFI Response, Feb 2020, https://arxiv.org/abs/2002.12714. Original figure by Benjamin Wallisch in arXiv:1903.04763 and arXiv:1810.02800, modified with addition of the CMB-HD limit; used with permission.)
Key cosmological parameters:

| Parameter | Symbol | Value |
| --- | --- | --- |
| Dark matter density parameter | $\Omega_c$ | 0.2589 ± 0.0057 |
| Matter density parameter | $\Omega_m$ | 0.3089 ± 0.0062 |
| Dark energy density parameter | $\Omega_\Lambda$ | 0.6911 ± 0.0062 |
| Critical density | $\rho_{\mathrm{crit}}$ | (8.62 ± 0.12) × 10⁻²⁷ kg/m³ |
| RMS matter fluctuation averaged over a sphere of radius $8h^{-1}$ Mpc | $\sigma_8$ | 0.8159 ± 0.0086 |
| Redshift at decoupling | $z_*$ | 1089.90 ± 0.23 |

The age of the universe at decoupling—that is, when the CMB was released—is roughly 380,000 years. The $\Omega_m$ parameter specifies the mean present-day fractional energy density of all forms of matter, including baryonic and dark matter.

An aside on dark photons: dark photons that constitute the cold dark matter must be a collection of nonthermal particles with a number density far larger than $n_\gamma$ and an energy spectrum peaked very close to $m_{A'}$ (for the sake of completeness, one can also address the possible existence of dark photons with a very small initial number density); this new bound excludes most of the viable parameter space.
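As a sanity check on the table, here is a small Python sketch (an editorial addition, not from the original page) converting the density parameters into physical densities; it treats $\Omega_m - \Omega_c$ as the baryon share, neglecting the small neutrino contribution:

```python
# Convert the table's density parameters into physical densities.
rho_crit = 8.62e-27    # kg/m^3, critical density from the table
omega_c, omega_m, omega_lambda = 0.2589, 0.3089, 0.6911

rho_dark   = omega_c * rho_crit               # cold dark matter
rho_matter = omega_m * rho_crit               # all matter (dark + baryonic)
rho_baryon = (omega_m - omega_c) * rho_crit   # baryons, by subtraction

print(f"dark matter: {rho_dark:.2e} kg/m^3")
print(f"all matter : {rho_matter:.2e} kg/m^3")
print(f"baryons    : {rho_baryon:.2e} kg/m^3")
print(f"Omega_m + Omega_Lambda = {omega_m + omega_lambda:.4f}")  # ~1: flat universe
```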
## Dark Matter Density Key Concepts

As advertised, the acoustic peaks in the CMB power spectrum are sensitive to the dark matter density in the universe. As we raise the physical density of the dark matter, the driving effect goes away at a given peak, such that its amplitude decreases; the height of the first peak in particular changes as we change the dark matter density. Note that decreasing the matter density also affects the baryon loading, since the dark matter potential wells go away, leaving nothing for the baryons to fall into. Although this effect changes the heights of all the peaks, it is only separable from the baryonic effects with at least three peaks:

- Raising the dark matter density reduces the overall amplitude of the peaks.
- Lowering the dark matter density eliminates the baryon loading effect, so that a high third peak is an indication of dark matter.
- With three peaks, the dark matter's effects are distinct from the baryons'; measuring the dark matter density resolves the main ambiguity in the measurement of the spatial curvature of the universe.

Having a third peak that is boosted to a height comparable to or exceeding the second peak is an indication that dark matter dominated the matter density in the plasma before recombination. Note that the self-gravity of the photons and baryons still plays a role in the first and second peaks, so the third peak is the cleanest test of this behavior. The matter to radiation ratio also controls the age of the universe at recombination, and hence how far sound can travel relative to how far light travels after recombination. (Formally it is the matter to radiation ratio that matters, but the radiation density is fixed in the standard model.) This is the leading-order ambiguity in the measurement of the spatial curvature, and it will be resolved when at least three peaks are precisely measured. Even though we are in the matter-dominated era, the energy density of the photons at $z_{\mathrm{dec}}$ exceeds that of the baryons. (Figure credit: Wayne Hu)

An analysis of the CMB allows for a discrimination between dark matter and ordinary matter precisely because the two components act differently: the dark matter accounts for roughly 90% of the mass, but unlike the baryons, they are not … In a universe where the full critical energy density comes from atoms and dark matter only, the weak gravitational potentials on very long length scales – which correspond to gentle waves in the matter density – evolve too slowly to leave a noticeable imprint on the CMB photons. So far as I understand, it points to dark matter because: for the sheer number of galaxies we observe in the universe to form without dark matter, primordial baryonic density fluctuations would have to be huge. That would leave us with pretty big variations in the CMB in the present day, which we don't observe.

In this research highlight, I will describe a new method by which the CMB may help solve the mystery of dark matter; these findings could also help map the structure of dark matter on the universe's largest length scales. The error bars correspond to observations with 0.5 µK-arcmin CMB noise in temperature and 15 arcsecond resolution over 50% of the sky, with reionization kSZ included as a foreground. Given these errors, one can distinguish between CDM and a suppression of structure below $10^9 M_\odot$ with a significance of at least 8σ, and a related forecast shows that CMB-HD can achieve σ(Neff) = 0.014, which would cross the critical threshold of 0.027. Separately, a model of neutrino self-interaction mediated by a Majoron-like scalar with sub-MeV mass shows that explaining the relic density of sterile neutrino dark matter implies a lower bound on the amount of extra radiation in the early universe, in particular $\Delta N_{\rm eff}>0.12$ at the CMB …

## Dark energy

Astronomers studying the cosmic microwave background have uncovered new direct evidence for dark energy – the mysterious substance that appears to be accelerating the expansion of the universe. The CMB also provides insight into the composition of the universe as a whole: the combination of the CMB and supernova data allows one to estimate independently the matter density and the density due to dark energy. The first evidence for the ∼70% dark energy in the universe came from observations of distant type Ia supernovae; soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima CMB experiments observed the first acoustic peak in the CMB, showing that the total (matter+energy) density is close to 100% of critical density. Each variant of dark energy has its own equation of state that produces a signature in the Hubble diagram of the type Ia supernovae (Turner 2003).

Figure 2: Constraints on dark energy density ($\Omega_\Lambda$) and on matter density ($\Omega_m$). This figure shows the new constraints on the values of dark energy and matter density provided by the ACT CMB weak lensing data. Green contours are the best available constraints, derived from CMB, supernovae, and BAO data. Gray contours are constraints from DES data on weak gravitational lensing, large-scale structure, supernovae, and BAO. As shown by the colored contours, a model without dark energy is ruled out at the 3.2 sigma level.

The Planck satellite, launched by the European Space Agency, made observations of the CMB for a little over 4 years, from August 2009 until October 2013; results from Planck's first 1 year and 3 months of observations were released in March 2013. A combined analysis gives dark matter density $\Omega_c h^2 = 0.120\pm 0.001$, baryon density $\Omega_b h^2 = 0.0224\pm 0.0001$, scalar spectral index $n_s = 0.965\pm 0.004$, and optical depth $\tau = 0.054\pm 0.007$ (quoting $68\,\%$ confidence regions on measured parameters and $95\,\%$ on upper limits). The resulting proportions for mass-energy density in the current universe are: ordinary matter 5%; dark matter 27%; dark energy 68%.

## Baryonic dark matter

The density of matter $\Omega_M$ can be broken down into baryonic and nonbaryonic matter (dark matter). There are various hypotheses about what dark matter could consist of, including predictions for its total mass and the mass of the individual particle (e.g. ~100 GeV). There are several ways we can probe the baryonic component (Roos 2012): (1) we have models of nucleosynthesis during the era shortly after the Big Bang (before the formation of the first stars). As the review "Dark Matter" (chapter 26, written August 2019 by L. Baudis, University of Zurich, and S. Profumo, UC Santa Cruz) opens its case for dark matter (section 26.1): modern cosmological models invariably include an electromagnetically close-to-neutral, non-…
http://math.stackexchange.com/users/28856/richard?tab=activity
Richard (math.stackexchange.com user activity)

- May 10: revised "Two-Point boundary value problem" (added 331 characters in body)
- May 10: comment on "Two-Point boundary value problem": Thanks @JitseNiesen. The main part I was confused on is how to calculate the $f_j$. Could you see if what I've done is correct? I rearranged the equation and got $y_{j+1}(1-\frac{h}{2})+y_{j-1}(1+\frac{h}{2})-2y_j=h^2f_j$. For example, $j=1 \Rightarrow y_{2}(1-\frac{h}{2})+y_{0}(1+\frac{h}{2})-2y_1=h^2f_1$. So does $h^2f_1 = \frac{1}{36}\cdot(-\frac{1}{6})$? And does $h^2f_2 = \frac{1}{36}\cdot(-\frac{2}{6})$?
- May 10: asked "Two-Point boundary value problem"
- May 9: asked "Finite difference method for BVP (2nd order)"
- May 9: comment on "1st order ODE problem (forward euler)": Ok, understood it now. Thanks again man, cheers!
- May 9: comment on "2nd order ODE to 1st order ODE/Forward euler method": Thanks, understood it. Much appreciated again.
- May 9: revised "2nd order ODE to 1st order ODE/Forward euler method" (updated question)
- May 9: comment on "1st order ODE problem (forward euler)": Quick question -- you wrote that $f(t_0,U^0) = u(0)$; does that mean $f(t_1,u^1) = u(1)$ and so forth for $n \in \mathbb{N}$?
- May 9: accepted an answer to "2nd order ODE to 1st order ODE/Forward euler method"
- May 9: comment on "2nd order ODE to 1st order ODE/Forward euler method": Thanks @JohnathanGleason, I understood everything you did. I just have one more question: how do I get from $f(t_0, W^0)= \left( \begin{array}{c} V^0 \\ 5\cdot 0 \cdot U^0 + \sin V^0 \end{array} \right)$ to $\left( \begin{array}{c} 0 \\ 0 \end{array} \right)$? In particular, shouldn't $U^0 = 1$?
- May 9: comment on "1st order ODE problem (forward euler)": Thanks Johnathan, great answer. :)
- May 9: accepted an answer to "1st order ODE problem (forward euler)"
- May 9: asked "1st order ODE problem (forward euler)"
- May 8: asked "2nd order ODE to 1st order ODE/Forward euler method"
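The stencil in the first comment is a standard central-difference discretization. As a quick illustration (not from the original thread), here is a minimal Python sketch; since the thread does not show the full problem, it assumes the BVP is $y'' - y' = f(x)$ on $[0,1]$ with $y(0)=y(1)=0$, $f(x) = -x$, and $h = 1/6$, which is consistent with the stencil and with the value $h^2 f_1 = \frac{1}{36}\cdot(-\frac{1}{6})$ quoted above.

```python
# Minimal sketch of the central-difference scheme discussed above.
# Assumptions (not stated in the log): the BVP is y'' - y' = f(x) on [0, 1]
# with y(0) = y(1) = 0, f(x) = -x, and mesh width h = 1/6.
import numpy as np

def solve_bvp(f, n=6, alpha=0.0, beta=0.0):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Interior unknowns y_1 .. y_{n-1}; the equation at node j is
    #   (1 + h/2) y_{j-1} - 2 y_j + (1 - h/2) y_{j+1} = h^2 f(x_j)
    m = n - 1
    A = np.zeros((m, m))
    b = h**2 * f(x[1:n])
    for j in range(m):
        A[j, j] = -2.0
        if j > 0:
            A[j, j - 1] = 1.0 + h / 2
        else:
            b[j] -= (1.0 + h / 2) * alpha   # known y_0 moves to the RHS
        if j < m - 1:
            A[j, j + 1] = 1.0 - h / 2
        else:
            b[j] -= (1.0 - h / 2) * beta    # known y_n moves to the RHS
    y = np.zeros(n + 1)
    y[0], y[n] = alpha, beta
    y[1:n] = np.linalg.solve(A, b)
    return x, y

x, y = solve_bvp(lambda t: -t)
print(np.round(y, 4))
```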
http://hal.in2p3.fr/in2p3-00986921
# Competition between fusion-evaporation and multifragmentation in central collisions in 58Ni+48Ca at 25A MeV

Abstract: The experimental data concerning the Ni-58+Ca-48 reaction at $E_{\mathrm{lab}}(\mathrm{Ni}) = 25A$ MeV, collected using the CHIMERA $4\pi$ device, have been analyzed in order to investigate the competition among different reaction mechanisms for central collisions in the Fermi energy domain. As the main criterion for centrality selection we have chosen the flow-angle ($\theta_{\mathrm{flow}}$) method, making an event-by-event analysis that considers the shape of events, as determined by the eigenvectors of the experimental kinetic-energy tensor. For the selected central events ($\theta_{\mathrm{flow}} > 60^\circ$), some global variables well suited to characterizing the pattern of central collisions have been constructed. The main features of the reaction products were explored by using different constraints on some of the relevant observables, like mass and velocity distributions and their correlations. Much emphasis was devoted, for central collisions, to the competition between fusion-evaporation processes, with subsequent identification of a heavy residue, and a possible multifragmentation mechanism of a well-defined (if any) transient nuclear system. Dynamical evolution of the system and pre-equilibrium emission were taken into account by simulating the reactions in the framework of transport theories. Different approaches have been envisaged (dynamical stochastic BNV calculations + sequential SIMON code, QMD, CoMD, etc.). Preliminary comparison of the experimental data with BNV calculations shows reasonable agreement with the assumption of sequential multifragmentation emission in the mass region of IMFs close to the heavy residues. Possible deviations from sequential processes were found for those IMFs in the region of masses intermediate between the mass of heavy residues and the mass of light IMFs. Further simulations are in progress. The experimental analysis will also be enriched by information obtained by inspecting the IMF-IMF correlation function, in order to elucidate the nature of the space-time decay properties of the emitting source associated with events having the largest IMF multiplicity.

Document type: Journal article

Identifiers: HAL Id: in2p3-00986921, version 1

Citation: L. Francalanza, U. Abbondanno, F. Amorini, S. Barlini, M. Bini, et al. Competition between fusion-evaporation and multifragmentation in central collisions in 58Ni+48Ca at 25A MeV. Nuclear Science and Techniques / Chinese Nuclear Society, Elsevier, 2013, 24, pp.050516. ⟨in2p3-00986921⟩
http://mathoverflow.net/feeds/question/42512
Awfully sophisticated proof for simple facts (MathOverflow, closed)

Question by Mariano Suárez-Alvarez (2010-10-17): It is sometimes the case that one can produce proofs of simple facts that are of disproportionate sophistication which, however, do not involve any circularity. For example, (I think) I gave an example in this M.SE answer (http://math.stackexchange.com/questions/6998/how-to-show-every-subgroup-of-a-cyclic-group-is-cyclic/7008#7008); the title of this question comes from Pete's comment there. If I recall correctly, another example is proving Wedderburn's theorem on the commutativity of finite division rings by computing the Brauer group of their centers.

> Do you know of other examples of nuking mosquitos like this?

Answer by Boris Bukh (2010-10-17): There are infinitely many primes because $\zeta(3)=\prod_p \frac{1}{1-p^{-3}}$ is irrational.

Answer by muad (2010-10-17): Another example from Math Underflow (http://math.stackexchange.com/questions/4990/how-could-i-calculate-the-rank-of-this-elliptic-curve): we can prove Fermat's Last Theorem for $n=3$ by a simple application of Nagell-Lutz (to compute the torsion subgroup), then Mordell's theorem (to see that the group must be $\mathbf{Z}^r \times \mathbf{Z}/3\mathbf{Z}$), then, to finish, the Gross-Zagier-Kolyvagin theorem (which gives $r = 0$); that shows it has no nontrivial solutions. I believe a similar approach works for $n=4$.

Answer by trb456 (2010-10-17): And of course there is Fürstenberg's topological proof of the infinitude of primes (http://en.wikipedia.org/wiki/F%C3%BCrstenberg%27s_proof_of_the_infinitude_of_primes). I love this because it shows that all the mathematical "plumbing" works; i.e. that number theory and topology connect up as they should.

Answer by mt (2010-10-17): Irrationality of $2^{1/n}$ for $n\geq 3$: if $2^{1/n}=p/q$ then $p^n = q^n+q^n$, contradicting Fermat's Last Theorem. Unfortunately FLT is not strong enough to prove $\sqrt{2}$ irrational. I've forgotten who this one is due to, but it made me laugh. EDIT: Steve Huntsman's link credits it to W. H. Schultz.

Answer by Andres Caicedo (2010-10-17): A Turing machine is a mathematical formalization of a computer (program). If $y\in(0,1)$, a Turing machine with oracle $y$ (http://en.wikipedia.org/wiki/Oracle_machine) has access to the digits of $y$, and can use them during its computations. We say that $x\le_T y$ iff there is a machine with oracle $y$ that allows us to compute the digits of $x\in(0,1)$. There are only countably many programs, so a simple diagonalization argument shows that there are reals $x$ and $y$ with $x{\not\le}_T y$ and $y{\not\le}_T x$. $(*)$ Being a set theorist, when I first learned of this notion, I couldn't help but come up with the following proof of $(*)$:

> Again by counting, every $x$ has only countably many $\le_T$-predecessors. So, if CH fails, there are Turing-incomparable reals. By the technique of forcing, we can find a (Boolean-valued) extension $V'$ of the universe $V$ of sets where CH fails, and so $(*)$ holds in this extension. Shoenfield's absoluteness theorem tells us that $\Sigma^1_2$-statements are absolute between (transitive) models with the same ordinals. The statement $(*)$, "there are Turing-incomparable reals", is $\Sigma^1_1$ (implementing some of the coding machinery of Gödel's proof of the 2nd incompleteness theorem), so Shoenfield's absoluteness applies to it. Working from the point of view of $V'$ and considering $V'$ and $V$, it follows that in $V'$, with Boolean value 1, $(*)$ holds in $V$. It easily follows from this that indeed $(*)$ holds in $V$.

It turns out that Joel Hamkins also found this argument, and he used it in the context of his theory of infinite time Turing machines, for which the simple diagonalization proof does not apply. So, at least in this case, the insane proof actually was useful at the end.

Answer by Qfwfq (2010-10-17): The Jordan curve theorem (http://en.wikipedia.org/wiki/Jordan_curve_theorem). As far as I know, the "elementary" proof is quite involved, at least with respect to the intuitive plausibility of the statement.

Answer by Barry (2010-10-17): Carl Linderholm, *Mathematics made difficult* (http://www.amazon.com/Mathematics-made-difficult-Carl-Linderholm/dp/0529045524/).

Answer by Johannes Ebert (2010-10-17): The Gauß-Bonnet theorem and the Riemann-Roch theorem for Riemann surfaces both have reasonably elementary proofs. Of course, they follow from the general Atiyah-Singer index theorem.
Answer by Harun Šiljak (2010-10-17): A recent example from MO (I found it quite entertaining): testing primality of one- and two-digit numbers using Stirling's formula and Wilson's theorem (to make it even more complicated, one has to use some extensions, calculation tricks and high-precision calculations): http://mathoverflow.net/questions/42393/has-stirlings-formula-ever-been-applied-with-interesting-consequence-to-wilson
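A computational aside (added here, not part of the quoted answer): Wilson's theorem alone already gives a deliberately inefficient primality test, without the Stirling-formula acrobatics of the linked question. A minimal Python sketch:

```python
# Wilson's theorem: n > 1 is prime iff (n-1)! ≡ -1 (mod n).
# A direct (and deliberately inefficient) primality test built on it.
def wilson_is_prime(n: int) -> bool:
    if n < 2:
        return False
    fact = 1
    for k in range(2, n):
        fact = (fact * k) % n   # keep the running factorial reduced mod n
    return (fact + 1) % n == 0

# All one- and two-digit primes, the case the linked question plays with:
print([n for n in range(2, 100) if wilson_is_prime(n)])
```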
Answer by drvitek (2010-10-17): A number of high school contest problems in number theory reduce to Mihailescu's theorem. (The only perfect powers with a difference of 1 are 8 and 9.)

Answer by Péter Komjáth (2010-10-17): The number of real functions is $c^c=2^c$, which is bigger than $c$ by Cantor's theorem ($c$ is cardinality continuum). The number of real continuous functions is at most $c^{\aleph_0}=c$, as they can be recovered from restrictions to ${\bf Q}$, and there are $c^{\aleph_0}$ many functions ${\bf Q}\to {\bf R}$. This argument, which requires several minor steps in an introductory set theory class, eventually shows that there exists a discontinuous real function.

Answer by Owen Sizemore (2010-10-17): The proof that the reduced $C^*$-algebra of the free group has no projections has the nice corollary that the circle is connected.

Answer by Denis Serre (2010-10-17): I was once flamed because I gave (in my book on Matrices) a short proof of a weak version of Perron-Frobenius' theorem (the spectral radius of a non-negative matrix is an eigenvalue, associated with a non-negative eigenvector) by using Brouwer's fixed point theorem. In my mind, that was to give students an occasion to illustrate the strength of Brouwer's theorem. Of course, there are more elementary proofs of the Perron-Frobenius theorem, even of the stronger version of it.

Answer by Andrej Bauer (2010-10-17): If two elements in a poset have the same lower bounds then they are equal, by the Yoneda lemma. (I actually said this in a seminar two weeks ago, and of course I explained I killed a mosquito with a nuke.)

Answer by Maxime Bourrigan (2010-10-17): In his 1962 article "A unique decomposition theorem for 3-manifolds", Milnor is actually interested in the unicity of a prime decomposition (http://en.wikipedia.org/wiki/Prime_decomposition_%283-manifold%29). For the existence, the method is very natural: if you find an irreducible sphere, you cut the manifold along it and obtain a decomposition $M = M_1 \sharp M_2$, and you do it again with each factor, and so on. Of course, the hard part is now to prove that this process terminates after a finite number of steps. For that, Milnor refers to Kneser but remarks that "if one assumes the Poincaré hypothesis then there is a much easier proof. Define $\rho(M)$ as the smallest number of generators for the fundamental group of $M$. It follows from the Gruško-Neumann theorem that $\rho(M_1\sharp M_2) = \rho(M_1) + \rho(M_2)$. Hence if $M\simeq M_1 \sharp \cdots \sharp M_k$ with $k > \rho(M)$ then some $M_i$ must satisfy $\rho(M_i)=0$, and hence must be isomorphic to $S^3$." A nice follow-up of this proof/joke is that Perel'man's proof of Poincaré's conjecture doesn't even use the Kneser-Milnor decomposition, and this argument is therefore valid.

Answer by Steven Gubkin (2010-10-18): The fundamental group of the circle is $\mathbb{Z}$ because: it is a topological group, so its fundamental group is Abelian by the Eckmann-Hilton argument. Thus its fundamental group and first singular homology group coincide by the Hurewicz theorem. Since singular homology is the same as simplicial homology, I can just do the one line of computation to obtain the result.

Answer by Franz Lemmermeyer (2010-10-18): There's hardly a book on class field theory that doesn't derive Kronecker-Weber as a corollary. Or quadratic reciprocity :-) Disclaimer: I like these proofs. Seeing quadratic reciprocity through the eyes of "Fearless Symmetry: Exposing the Hidden Patterns of Numbers" by Ash and Gross is an experience you wouldn't want to miss.

Answer by Nate Eldredge (2010-10-18): **Proposition.** Let $f$ be a bounded measurable function on $[0,1]$. Then there is a sequence of $C^\infty$ functions which converges to $f$ almost everywhere. *Proof (by flyswatter).* Take the convolution of $f$ with a sequence of standard mollifiers. *Proof (by nuke).* By Carleson's theorem (http://en.wikipedia.org/wiki/Carleson_theorem), the Fourier series of $f$ is such a sequence.

Answer by Peter Arndt (2010-10-18): In a lecture course I saw a proof of Poincaré duality by deducing it from Grothendieck duality. Proving Grothendieck duality for sheaves on topological spaces took a good part of the semester of course, and then deducing Poincaré duality was still not a one-liner either, but filled an entire lecture in which we worked out what all the shrieks and derived functors were doing in terms of differential forms or singular cochains.

Answer by Joel David Hamkins (2010-10-18):

- There is no largest natural number. The reason is that by Cantor's theorem, the power set of a finite set is a strictly larger set, and one can prove inductively that the power set of a finite set is still finite.
- All numbers of the form $2^n$ for natural numbers $n\geq 1$ are even. The reason is that the power set of an $n$-element set has size $2^n$, proved by induction, and this is a Boolean algebra, which can be decomposed into complementary pairs $\{a,\neg a\}$. So it is a multiple of $2$.
- Every finite set can be well-ordered. This follows by the Axiom of Choice via the Well-ordering Principle, which asserts that every set can be well-ordered.
- Every non-empty set $A$ has at least one element. The reason is that if $A$ is nonempty, then $\{A\}$ is a family of nonempty sets, and so by the Axiom of Choice it admits a choice function $f$, which selects an element $f(A)\in A$.

Answer by François G. Dorais (2010-10-19): Here is an example that I learned through MO (http://mathoverflow.net/questions/15220/is-there-an-elementary-proof-of-the-infinitude-of-completely-split-primes)! The infinitude of completely split primes in a Galois extension $K$ of $\mathbb{Q}$ is an easy consequence of Chebotarev's density theorem (http://en.wikipedia.org/wiki/Chebotarev%27s_density_theorem). A slightly simpler argument involves showing that the Dedekind zeta function $\zeta_K(s)$ has a simple pole at $s = 1$. However, there is a very simple arithmetic argument that accomplishes the desired task...

Answer by David MJC (2010-10-22): A quiver whose unoriented graph is the affine D4 Dynkin diagram is tame. Therefore the moduli space of four points on a projective line is one-dimensional.
Answer by Johan (2010-11-03): The case of Fatou's theorem (http://en.wikipedia.org/wiki/Fatou%27s_theorem) for $H^2$ can be proven as follows: by Carleson's theorem the series $\sum a_n e^{i \theta n}$ converges for almost all $\theta$ if $\sum |a_n|^2 < \infty$. Now we can appeal to Abel's theorem to conclude that the function $f(z)= \sum a_n z^n$ has radial limits almost everywhere on the unit circle. (I am not sure if we can get non-tangential limits this way.) But Carleson's theorem is a much more difficult theorem than what we have proved here. (I got this example from a Hardy space course I am taking right now.)

Answer by Terry Tao (2010-11-03): An example that came up in my measure theory class today: the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$ diverges, because otherwise the functions $f_n := \frac{1}{n} 1_{[0,n]}$ would be dominated by an absolutely integrable function. But $$\int_{\bf R} \lim_{n \to \infty} f_n(x)\ dx = 0 \neq 1 = \lim_{n \to \infty} \int_{\bf R} f_n(x)\ dx,$$ contradicting the dominated convergence theorem.

Answer by Gerry Myerson (2010-11-03): D. J. Lewis, "Diophantine equations: $p$-adic methods", in W. J. LeVeque, ed., Studies in Number Theory, 25-75, published by the MAA in 1969, stated on page 26: "The equation $x^3-117y^3=5$ is known to have at most 18 integral solutions but the exact number is unknown." No proof or reference is given. R. Finkelstein and H. London, "On D. J. Lewis's equation $x^3+117y^3=5$", Canad. Math. Bull. 14 (1971) 111, prove the equation has no integral solutions, using ${\bf Q}(\root3\of{117})$. Then Valeriu St. Udrescu, "On D. J. Lewis's equation $x^3+117y^3=5$", Rev. Roumaine Math. Pures Appl. 18 (1973) 473, pointed out that the equation reduces, modulo 9, to $x^3\equiv5\pmod9$, which has no solution. I suspect Lewis was the victim of a typo, and some other equation was meant, but Finkelstein and London appear to have given an inadvertently sophisticated proof for a simple fact.
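A one-line sanity check of Udrescu's observation (added here, not part of the original answer): the cubes modulo 9 only hit $\{0, 1, 8\}$, so $x^3 \equiv 5 \pmod 9$ is impossible, and since $117 = 13 \cdot 9$ the equation reduces mod 9 to exactly that congruence.

```python
# Udrescu's observation: cubes mod 9 only take the values {0, 1, 8}, so
# x^3 ≡ 5 (mod 9), and hence x^3 - 117 y^3 = 5, has no integer solution.
print(sorted({(x**3) % 9 for x in range(9)}))   # -> [0, 1, 8]
# 117 = 13 * 9, so x^3 - 117*y^3 ≡ x^3 (mod 9), which is never 5.
```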
Answer by Gil Kalai (2010-11-03): There is a Fourier-analytic proof of Sperner's theorem (http://michaelnielsen.org/polymath1/index.php?title=Fourier-analytic_proof_of_Sperner) which is much more complicated than the combinatorial proof (and gives less in certain respects). This was part of the polymath1 project. A general point is that sometimes trying to prove a theorem X using method Y is valuable even if the proof is much more complicated than needed. So while simplification of complicated proofs is a noble endeavor, complicafication of simple theorems is also not without merit! Here is another example (taken from lecture notes by Spencer): suppose you want to prove that there is always a 1-1 function from a (finite) set $A$ to a set $B$ when $|B|\geq|A|$, but you want to prove it using the probabilistic method. Write $|A|=n$. If $|B|$ is larger than $n^2$ or so, you can show that a 1-1 map exists by considering a random function and applying the union bound. If $|B|$ is larger than $6n$ or so, you can apply the much more sophisticated Lovász Local Lemma to get a proof. I am not aware of probabilistic proofs of this nature which work when $|B|$ is smaller, and this is an interesting challenge.

Answer by Michael Greinecker (2010-11-04): Baryshnikov gave a topological proof (http://www.math.uchicago.edu/~shmuel/AAT-readings/Econ%20segment/Baryshnikov%20TSC%20Advances%20App%20Math%20paper.pdf) of Arrow's impossibility theorem (http://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem), a result for which there are well-known short and elementary proofs (http://cowles.econ.yale.edu/~gean/art/p1116.pdf).

Answer by Peter Krautzberger (2010-11-08): Every finite semigroup contains an idempotent element. You can nuke this problem using a theorem by Ellis that every compact, semi-topological semigroup contains an idempotent (which uses Zorn's Lemma).

Answer by Jason (2010-12-09):

> Theorem (ZFC + "There exists a supercompact cardinal."): There is no largest cardinal.

Proof: Let $\kappa$ be a supercompact cardinal, and suppose that there were a largest cardinal $\lambda$. Since $\kappa$ is a cardinal, $\lambda \geq \kappa$. By the $\lambda$-supercompactness of $\kappa$, let $j: V \rightarrow M$ be an elementary embedding into an inner model $M$ with critical point $\kappa$ such that $M^{\lambda} \subseteq M$ and $j(\kappa) > \lambda$. By elementarity, $M$ thinks that $j(\lambda) \geq j(\kappa) > \lambda$ is a cardinal. Then since $\lambda$ is the largest cardinal, $j(\lambda)$ must have size $\lambda$ in $V$. But then since $M$ is closed under $\lambda$-sequences, it also thinks that $j(\lambda)$ has size $\lambda$. This contradicts the fact that $M$ thinks that $j(\lambda)$, which is strictly greater than $\lambda$, is a cardinal.

Answer by Pete L. Clark (2010-12-19): I claim that the rational canonical model of the modular curve $X(1) = \operatorname{SL}_2(\mathbb{Z}) \backslash \overline{\mathcal{H}}$ is isomorphic over $\mathbb{Q}$ to the projective line $\mathbb{P}^1$. Indeed, by work of Igusa on integral canonical models, the corresponding moduli problem (for elliptic curves) extends to give a smooth model over $\mathbb{Z}$. By a celebrated 1985 theorem of Fontaine, this implies that $X(1)$ has genus zero. Therefore it is a Severi-Brauer conic, which by Hensel's Lemma and the Riemann Hypothesis for curves over finite fields is smooth over $\mathbb{Q}_p$ iff it has a $\mathbb{Q}_p$-rational point. By the reciprocity law in the Brauer group of $\mathbb{Q}$, this implies that $X(1)$ also has $\mathbb{R}$-rational points, and then by the Hasse-Minkowski theorem it has $\mathbb{Q}$-rational points. Finally, it is an (unfortunately!) very elementary fact that a smooth genus zero curve with a rational point must be isomorphic to $\mathbb{P}^1$. I did actually give an argument like this in a class I taught on Shimura varieties. Like many of the other answers here, it is ridiculous overkill in the situation described but begins to be less silly when looked at more generally, e.g. in the context of Shimura curves over totally real fields.

Answer by Ramsey (2011-01-28): In the recent paper by Ono and Bruinier (it's currently on the AIM web site), "An algebraic formula for the partition function", they use their formula to determine the number of partitions of 1. This calculation involves CM points, evaluating a certain weak Maass form at these points, the Hilbert class field of $\mathbb{Q}(\sqrt{-23})$, etc.

Answer by William Hale (2011-01-29): Using character theory, any group of order 4 is abelian since the only way to write 4 as a sum of squares is $4 = 1^2 + 1^2 + 1^2 + 1^2$.

Answer by Peter (2011-01-29): No finite field $\mathbb{F}_q$ is algebraically closed: let $k$ be an algebraically closed field. Then every element of $GL_2(k)$ has an eigenvector, and hence is similar to an upper triangular matrix. Therefore $GL_2(k)$ is the union of the conjugates of its proper subgroup $T$ of upper triangular matrices. No finite group is the union of the conjugates of a proper subgroup, so $GL_2(k)$ is not finite. Hence $k$ is not finite either.
Answer by Michael Blackmon (2011-01-29): Because for some *reason* no one has mentioned it: **Russell's proof that 1+1=2.** http://quod.lib.umich.edu/cgi/t/text/pageviewer-idx?c=umhistmath;cc=umhistmath;rgn=full%20text;idno=AAT3201.0001.001;didno=AAT3201.0001.001;view=pdf;seq=00000412

Answer by Artur (2011-03-25): In 1993 D. Christodoulou and S. Klainerman proved the global nonlinear stability of Minkowski space. It was one of the most important results of mathematical general relativity. Their proof was published in an over-500-page book. It stated that any initial data "sufficiently close" to that corresponding to Minkowski space will remain so forever. Most mathematical physicists believed this frankly simple fact, but the proof was one of the most sophisticated in the field.

Answer by Timothy Chow (2011-05-05): There is a simple pigeonhole argument for the following fact, due to Erdős and Szekeres I believe:

> In any sequence $a_1, a_2, \ldots, a_{mn+1}$ of $mn+1$ distinct integers, there must exist either an increasing subsequence of length $m+1$ or a decreasing subsequence of length $n+1$ (or both).

The "sophisticated" proof of this fact is that any Young tableau with $mn+1$ boxes must either have more than $m$ columns or more than $n$ rows, and so the result follows because the number of columns/rows corresponds to the length of the longest increasing/decreasing subsequence of the corresponding permutation under the Robinson-Schensted correspondence.

Answer by anonymous (2011-05-05): Here's a topological proof that $\mathbb{Z}$ is a PID. Let $p,q$ be relatively prime. Then the line from the origin to the point $(p,q)\in\mathbb{R}^2$ does not pass through any lattice point, and therefore defines a simple closed curve in the torus $\mathbb{T}=\mathbb{R}^2/\mathbb{Z}^2$. Cut the torus along this curve. By the classification of surfaces, the resulting surface is a cylinder. Therefore, we can reglue it to get a torus, but where our simple closed curve is now a "stupid" such thing, i.e., a ring around the torus. Which is all to say that in this case, there exists an automorphism of the torus which takes $(p,q)\in\mathbb{Z}^2=\pi_1(\mathbb{T})$ to $(1,0)$. But this gives a matrix $\begin{bmatrix} p & x \\ q & y \end{bmatrix}\in GL_2(\mathbb{Z})$, so $py-qx\in\mathbb{Z}^{\times}$, i.e., $py-qx=\pm 1$. The only two things this proof needs are the computation of the homology of a torus and the classification of surfaces, neither of which actually relies on $\mathbb{Z}$ being a PID!

Answer by Jan Weidner (2011-05-05): Every finite-dimensional complex representation of a finite cyclic group decomposes into a direct sum of irreducible representations. This can be deduced from the decomposition theorem for perverse sheaves as follows: it is enough to show that the group algebra is semisimple. To check this it is enough to lift the regular representation of $\mathbb Z/n$ to $\mathbb Z=\pi_1(\mathbb C^*)$ and show that it decomposes into a direct sum of irreducible representations of $\mathbb Z$. Consider the covering $z \mapsto z^n$ of $\mathbb C^*$ by itself. It is easy to see that the monodromy action on the pushforward of the constant sheaf $\mathbb C[1]$ along this map coincides with the regular representation. On the other hand, since the map is small, the decomposition theorem guarantees that the pushforward decomposes into a direct sum of IC complexes. Since our map is a covering and our space is smooth, these are actually irreducible local systems on $\mathbb C^*$. But irreducible local systems correspond to irreducible representations of the fundamental group.

Answer by Jan Weidner (2011-05-05): The space $C[0,1]$ is not reflexive. If it were, it would also have a predual. But then it would be a von Neumann algebra. However, von Neumann algebras correspond to very strange topological spaces, which have the property that closures of open subsets are again open. Clearly this is not the case for $[0,1]$.

Answer by Manuel Araújo (2011-06-05): Every finite integral domain is a field: let $D$ be a finite integral domain. Being finite, it is Artinian and Noetherian and therefore has Krull dimension zero. But $(0)$ is a prime ideal, because $D$ is a domain; therefore $(0)$ is a maximal ideal and $D$ is a field.

Answer by Bill Johnson (2011-06-05): If $0\le f_n \le 1$ is a sequence of continuous functions on $[0,1]$ that converges pointwise to $0$, then $\int_0^1 f_n(t)\, dt$ converges to $0$. Understandable by a freshman, the statement is hard to prove using only the tools of calculus but is immediate from the dominated convergence theorem.
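A numerical illustration of the two dominated-convergence examples above (added here, not from either answer): Tao's $f_n = \frac{1}{n} 1_{[0,n]}$ has integral 1 for every $n$ even though $f_n \to 0$ pointwise (no integrable dominating function exists), while $f_n(t) = t^n$ on $[0,1]$ is dominated by the constant 1 and its integrals do go to 0, consistent with Bill Johnson's statement.

```python
# Riemann-sum check of the two examples: the undominated sequence keeps
# integral 1, while the dominated sequence's integrals tend to 0.
import numpy as np

for n in (10, 100, 1000):
    x = np.linspace(0.0, n, 200_001)
    f_tao = np.full_like(x, 1.0 / n)      # (1/n) * 1_[0,n] sampled on [0, n]
    t = np.linspace(0.0, 1.0, 200_001)
    f_pow = t**n                          # dominated by g(t) = 1 on [0, 1]
    dx, dt = x[1] - x[0], t[1] - t[0]
    print(n, f_tao.sum() * dx, f_pow.sum() * dt)   # ~1 always vs. -> 0
```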
Answer by MTS (2011-06-15): One can use the continuous functional calculus of a C$^*$-algebra (namely $M_N(\mathbb{C})$) to prove that a normal matrix is diagonalizable.

Answer by godelian (2011-06-19): There is an elementary problem that goes more or less like this: you have a special telephone keyboard with nine lighted buttons (one for each number from $1$ to $9$); pushing each button other than number $5$ (the central button) switches the state of the lights of the button itself and of all its surrounding buttons; pushing number $5$ only switches the state of the lights of its surrounding buttons, but not of itself. Starting with all lights off, the question asks whether we can get all lights on by pushing buttons. The obvious solution to the negative answer relies on the fact that the parity of lighted buttons at every state of the keyboard is an invariant. But there is also a sophisticated solution. Take the set $X$ of $9$ elements and think of $\mathcal{P}(X)$ as a vector space over the field $\mathbb{Z}_2$, with the sum being the symmetric difference and the product given by $0.v=\emptyset$ and $1.v=v$. Then we can identify each state of the keyboard with a corresponding vector in this space, while pushing button $i$ corresponds to summing a special vector $v_i$ (associated to the button) to the vector representing the state of the keyboard. Thus, we are wondering if there are some scalars $\alpha_i$ such that $\sum_{i=1}^{9} \alpha_iv_i=X$. Writing each $v_i$ and $X$ in the basis of the space given by the singleton elements $1, ..., 9$, we get a system of linear equations which can be seen to have no solutions.
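The linear-algebra solution is easy to carry out by machine. The sketch below (an added illustration, not from the original answer) builds the $9\times 9$ toggle matrix over $\mathrm{GF}(2)$ and checks solvability by the precise criterion, comparing the rank of the coefficient matrix with that of the augmented matrix; the ranks differ, so the all-ones state is unreachable, consistent with the parity invariant.

```python
# Build the 9x9 toggle matrix over GF(2) for the nine-button keyboard and
# check whether the all-ones state lies in its column span.  (It does not.)
import numpy as np

A = np.zeros((9, 9), dtype=int)
for i in range(9):                       # button being pushed
    r, c = divmod(i, 3)
    for j in range(9):                   # light possibly toggled
        s, d = divmod(j, 3)
        if max(abs(r - s), abs(c - d)) <= 1 and not (i == 4 and j == 4):
            A[j, i] = 1                  # button 5 (index 4) skips itself

def rank_gf2(M):
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]       # swap pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                    # eliminate over GF(2)
        rank += 1
    return rank

b = np.ones((9, 1), dtype=int)
print(rank_gf2(A), rank_gf2(np.hstack([A, b])))    # ranks differ: unsolvable
```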
Answer by Benjamin Steinberg (2011-07-09): Proving the Banach fixed point theorem for compact metric spaces using the structure of monothetic compact semigroups.

**Thm.** Let $X$ be a compact metric space and $f\colon X\to X$ a strict contraction, meaning $d(f(x),f(y))< d(x,y)$ for $x\neq y$. Then $f$ has a unique fixed point and, for any $x_0\in X$, the iterates $f^n(x_0)$ converge to the fixed point.

**Pf.** Contractions are clearly equicontinuous, so by the Arzelà-Ascoli theorem, the closed subsemigroup $S$ generated by $f$ is compact in the compact-open topology. Now, a monothetic compact semigroup has a unique minimal ideal $I$, which is a compact abelian group. Moreover, either $S$ is finite and $I$ consists of all sufficiently high powers of $f$, or $S$ is infinite and $I$ consists of all limit points of the sequence $f^n$. In either case, $I$ consists of strict contractions, being in the ideal generated by $f$. Thus the identity element $e$ of $I$ is a constant map, being an idempotent strict contraction. Thus $I=\{e\}$, being a group. Thus $f^n$ converges to a constant map to some point $y$. Clearly $y$ is the unique fixed point of $f$.
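As a numerical companion (added here; the answer itself is about the semigroup proof), iterating a strict contraction of a compact interval really does converge to the unique fixed point. A standard example is $f(x) = \cos x$ on $[0,1]$, which maps $[0,1]$ into itself with $|f'(x)| \le \sin(1) < 1$ there:

```python
# Iterating the strict contraction f(x) = cos(x) on [0, 1] converges to its
# unique fixed point, approximately 0.7390851 (the "Dottie number").
import math

x = 0.0
for _ in range(100):
    x = math.cos(x)
print(x, math.cos(x) - x)   # residual is ~0 at the fixed point
```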
Answer by Gil Kalai (2011-08-29): Arrow's theorem is a basic result in social choice theory which has several simple proofs. (For three proofs see "Three Brief Proofs of Arrow's Impossibility Theorem" by J. Geanakoplos: http://cowles.econ.yale.edu/P/cp/p11a/p1116.pdf.) It also has a few complicated proofs: the paper by Tang, Pingzhong and Lin, Fangzhen, "Computer-aided proofs of Arrow's and other impossibility theorems", Artificial Intelligence 173 (2009), no. 11, 1041-1053, gives an inductive proof based on a rather complicated inductive step and a computerized check for the base case. The paper by Yuliy Baryshnikov, "Unifying impossibility theorems: a topological approach", Adv. in Appl. Math. 14 (1993), 404-415, gives a proof based on algebraic topology. My paper "A Fourier-theoretic perspective on the Condorcet paradox and Arrow's theorem", Adv. in Appl. Math. 29 (2002), 412-426, gives a fairly complicated Fourier-theoretic proof, but only of a special case of the theorem. (A complicated proof of a related theorem is by Shelah, Saharon, "On the Arrow property", Adv. in Appl. Math. 34 (2005), 217-251.)

Answer by Woett (2011-10-13): An olympiad-type question I once tried to solve was: prove that all integers $>1$ can be written as a sum of two squarefree integers [1]. The proof I came up with (which uses at least 3 non-trivial results!) went as follows. We can check that it holds for $n \le 10^4$. Now, let $S$ be the set of all squarefree integers, except for the primes larger than $10^4$. Then, by the fact that the Schnirelmann density of the set of squarefree integers equals $\frac{53}{88}$ [2] and some decent estimate on the prime counting function [3], we have that the Schnirelmann density of $S$ must be larger than $\frac{1}{2}$. By Mann's theorem [4] we now have that every positive integer can be written as a sum of at most 2 elements of $S$. In particular, every prime number can be written as a sum of 2 elements of $S$, and every integer that is not squarefree can be written as a sum of 2 elements of $S$. All that is now left is proving the theorem for composite squarefree numbers: $n = pq = (p_1 + p_2)q = p_1q + p_2q$, where $p$ is the smallest prime dividing $n$ and $p_1, p_2$ are squarefree integers.

[1] http://www.artofproblemsolving.com/Forum/viewtopic.php?f=470&t=150908
[2] http://www.jstor.org/pss/2034736
[3] http://en.wikipedia.org/wiki/Prime-counting_function#Inequalities
[4] http://mathworld.wolfram.com/MannsTheorem.html
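The base case "it holds for $n \le 10^4$" is easy to verify by computer. Here is a short sieve-based check (an added illustration, not part of the original answer):

```python
# Check that every 1 < n <= 10^4 is a sum of two squarefree integers.
# Sieve the squarefree flags first: strike out multiples of each square.
N = 10_000
squarefree = [True] * (N + 1)
squarefree[0] = False
for d in range(2, int(N**0.5) + 1):
    for m in range(d * d, N + 1, d * d):
        squarefree[m] = False   # divisible by d^2, hence not squarefree

assert all(any(squarefree[a] and squarefree[n - a] for a in range(1, n))
           for n in range(2, N + 1))
print("every 1 < n <=", N, "is a sum of two squarefree integers")
```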
Answer by Matthias Künzer (2011-10-13): (1) Let $G$ be a finite group. Let $H\leqslant G$ be a subgroup of index $2$. Let us prove that $H$ is normal in $G$. Let $L|K$ be a Galois extension of fields with Galois group $G$ (easily constructed via a representation of $G$ as a permutation group, taking $L$ to be a function field in suitably many variables on which $G$ acts and $K$ to be the fixed field under $G$). Let $F$ be the fixed field in $L$ under $H$. Then $F|K$ is a quadratic extension, hence normal. By the main theorem of Galois theory, it follows that $H$ is normal in $G$. (2) Let $G$ be a finite group. Let $K$ be a finite field of characteristic not dividing $|G|$. Let us prove Maschke's theorem in this situation: $KG$ is semisimple. Given two finite-dimensional $KG$-modules $X$ and $Y$, it suffices to show that $\text{Ext}^1_{KG}(X,Y) = 0$. But $\text{Ext}^1_{KG}(X,Y) = \text{H}^1(G,\text{Hom}_K(X,Y)) = 0$, since $|G|$ and $|\text{Hom}_K(X,Y)|$ are coprime. (Well, not sure whether any of these arguments are really awfully sophisticated. It's rather breaking a butterfly on a small wheel.)

Answer by none (2011-10-14): Dan Bernstein, "A New Proof that 83 is Prime", http://cr.yp.to/talks/2003.03.23/slides.pdf

Answer by Zsbán Ambrus (2012-03-04): The following theorem has several essentially different proofs that need quite different levels of mathematical background, ranging from high school to graduate level. Which proof is most natural depends on who you ask, but many people (including me) will find at least some proof unnecessarily complicated.

> There exists a set $A$ that is everywhere dense in the square $[0, 1]^2$, but such that for any real number $x$, the intersections $A \cap (\{x\} \times [0, 1])$ and $A \cap ([0, 1] \times \{x\})$ are both finite.

(This is a variant of a homework problem posed by Sági Gábor.) Here's the idea of a few proofs.

- $A = \{(p/r, q/r) \mid p, q, r \in \mathbb{Z} \text{ and } \gcd(p,r) = \gcd(q,r) = 1 \}$ is dense because if you subdivide the square into $2^n$ by $2^n$ squares, $A$ contains the center of each square; and it has only as many points on each horizontal or vertical line as the denominator of $x$.
- $A = \{(x + y\sqrt3, y - x\sqrt3) \mid x, y\in\mathbb{Q} \}$ is dense because it's a scaled rotation of $\mathbb{Q}^2$, but has at most one point on every horizontal or vertical line, otherwise $\sqrt3$ would be rational.
- Choose $a_0, b_0, a_1, b_1$ as four reals linearly independent over the rationals; this is possible because of cardinalities. $A = \{(ma_0 + na_1, mb_0 + nb_1) \mid m, n \in \mathbb{Q}\}$ has no two points sharing coordinates because of rational independence, and $A$ is dense because it's a non-singular affine image of $\mathbb{Q}^2$.
- $A$ is the set of a countably infinite sequence of random points, independent and uniform on the square. This is almost surely dense, but almost surely has no two points that share a coordinate.
- Choose a countable topological base of the square, then choose a point from each of its elements inductively, such that you never choose a point that shares a coordinate with any point chosen previously.
- Choose a continuum (or smaller) size topological base of the square, then choose a point from each by transfinite induction, such that when you choose a point, the cardinality of points chosen previously is less than continuum, thus you can avoid sharing coordinates with those points.
- Choose $a, b$ as reals such that $a, b, 1$ are linearly independent over the rationals, possible because of cardinalities. Let $A = \{((ma + nb) \bmod 1, (ma - nb) \bmod 1) \mid m, n \in \mathbb{Z}\}$. No two points share coordinates because of rational independence. Looking on the torus, $A$ is dense somewhere on the square, and the difference of any two points of $A$ is in $A$, so it must be dense at the origin. As $A$ is closed under addition, it must be dense on a line passing through the origin. As it's also closed under rotation by $\pi/2$, it's also dense on the rotation of that line; thus, because it's closed under addition, dense everywhere.
- Choose $a, b$ like above. Let $A = \{(an \bmod 1, bn \bmod 1) \mid n \in \mathbb{Z}\}$. Prove $A$ is dense by ergodic theory and Fourier analysis.

Update: Edited the drafts of proofs to be somewhat cleaner. Permuted proofs. Also fixed a typo in the last proof.

Answer by unknown (google) (2012-08-12): The density Hales-Jewett theorem implies that there cannot exist perfect magic hypercubes of fixed side length $k$ and arbitrarily high dimension $n$ whose cells are filled with the consecutive numbers $1,2,\dots,k^n$ and for which the numbers in cells along any geometric line sum to the magic constant $\frac{k(k^n+1)}{2}$. For, take the cells with numbers $1,2,\dots,\left\lfloor\frac{k^n}{2}\right\rfloor$. This always has density about $1/2$, and so by the density Hales-Jewett theorem, will contain a hyperline for sufficiently large $n$. But no $k$ numbers from this set of density about $1/2$ can ever sum to the magic constant.

Answer by unknown (google) (2012-08-20): Not really sure if this should count, but: from Chebyshev's proof (http://www.fen.bilkent.edu.tr/~franz/nt/cheb.pdf) using the central binomial coefficient that there exists some constant $C>0$ such that $$\pi(x) < C\frac{x}{\log x}$$ for sufficiently large $x$, and from the infinitude of primes, we get that $$\log x \ll x.$$

Answer by Kjetil B Halvorsen (2012-08-20): The Herbert Simon (Nobel Prize winner, Economics, 1978) and Karl Egil Aubert dispute; see http://www.tandfonline.com/doi/abs/10.1080/00201748208601972. Aubert criticizes Simon for irrelevant use of mathematics in his "application", but also for the fact that he uses the Brouwer fixed point theorem for a proof when the intermediate value theorem would be enough.

Answer by Pål GD (2012-11-09): $Forest$ is in $P$. Given a finite undirected graph $G$, one can in polynomial time decide whether the input is a forest. The class of all finite forests is a minor-closed property and, by the **Robertson-Seymour theorem**, there are finitely many forbidden minors. We can in $O(n^3)$ time test whether $G$ contains a forbidden minor and, if not, output yes.
Then $\Gamma(X,\mathscr{F})$ inject into $\mathrm{Aut}(\pi)$(taken in the category of spaces étalé over $X$).</p> <p>Proof: Straightforward and not difficult(but there are a bunch of things to check).</p> <p>Theorem: (Cayley's theorem) Let $G$ a finite group, then $G$ is a subgroup of a symmetric group.</p> <p>Proof. Let $X$ a nonempty, connected topological space and take $\mathbb{G}$ the constant sheaf associated to $G$ on $X$. Apply previous theorem and notice that $Sp\acute{e}(\mathbb{G})$ is a globally trivial covering space, and homeomorphic(over $X$) to $\coprod_{|G|} X$, so that $G$ injects into the group of deck transformations of this covering space, which is just $\mathfrak{S}_{|G|}$!</p> http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/111930#111930 Answer by Ramón Barral for Awfully sophisticated proof for simple facts Ramón Barral 2012-11-09T19:32:14Z 2012-11-09T19:32:14Z <p>Seen on <a href="http://legauss.blogspot.com.es/2012/05/para-rir-ou-para-chorar-parte-13.html" rel="nofollow">http://legauss.blogspot.com.es/2012/05/para-rir-ou-para-chorar-parte-13.html</a></p> <p>Theorem: $5!/2$ is even.</p> <p>Proof: $5!/2$ is the order of the group $A_5$. It is known that $A_5$ is a non-abelian simple group. Therefore $A_5$ is not solvable. But the Feit-Thompson Theorem asserts that every finite group with odd cardinal is solvable, so $5!/2$ must be an even number.</p> http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/116957#116957 Answer by ACL for Awfully sophisticated proof for simple facts ACL 2012-12-21T09:02:02Z 2012-12-21T09:02:02Z <p>Liouville remarked that the fundamental theorem of algebra could be derived from his theorem that elliptic functions (doubly periodic meromorphic functions of one complex variable) must have poles. The proof goes by substituting the inverse of a polynomial as the argument of, say, Weierstrass $\wp$-function with large enough periods, and observing that it has no poles.</p> <p>Of course, the proof of Liouville's theorem on elliptic functions requires the same kind of arguments used for proving the famous Liouville theorem (due to Cauchy) that bounded holomorphic functions are bounded and, apparently, already used before by Cauchy for algebraic functions.</p> <p>But Liouville's observation is really more complicated than the present proof. What it simplifies, however, is the compactness argument. For elliptic functions, or for algebraic functions, one has at hand a compact Riemann surface on which some holomorphic function is bounded, hence achieves its supremum, etc. This may be the reason why the general form of Liouville theorem came only after the case of algebraic or elliptic functions.</p> http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/116973#116973 Answer by Pablo Zadunaisky for Awfully sophisticated proof for simple facts Pablo Zadunaisky 2012-12-21T14:28:42Z 2012-12-21T14:28:42Z <p>The skew-field of quaternions $\mathbb H$ is isomorphic to its opposite algebra. </p> <p>Indeed, by a theorem of Frobenius, division algebras over the reals are isomorphic to either $\mathbb R, \mathbb C$ or $\mathbb H$. Since $\mathbb H^\mathsf{opp}$ is again a division algebra, it must be isomorphic to one of these. 
There are several ways to conclude (since it is four-dimensional, or since it is not commutative, or since it has more than two square roots of $-1$, etc.) that the only possibility is $\mathbb H \cong \mathbb H^\mathsf{opp}$.

If you are only interested in Morita equivalence between these two algebras, you can do better: the Brauer group of $\mathbb R$ is isomorphic to $\mathbb Z_2$, so all its elements have order $2$. This implies that the class of $\mathbb H$ coincides with its inverse, which is the class of $\mathbb H^{\mathsf{opp}}$. Thus $\mathbb H$ and $\mathbb H^\mathsf{opp}$ are Morita equivalent.

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/116986#116986
Answer by Johannes Ebert (2012-12-21):

The fundamental theorem of algebra holds because:

1. For each monic degree $n$ polynomial $p$ over the complex numbers, there is an $n \times n$ matrix $A$ with characteristic polynomial $\pm p$.
2. We show that $A$ has an eigenvector.
3. We may assume that $0$ is not an eigenvalue of $A$ (otherwise $p(0)=0$), so $A \in GL_n (\mathbb{C})$.
4. $A$ induces a self-map $f_A$ of $CP^{n-1}$, and the eigenspaces of $A$ correspond to the fixed points of $f_A$; so we need to show that $f_A$ has a fixed point.
5. As $GL_n (\mathbb{C})$ is connected, $f_A$ is homotopic to the identity. (This does not depend on the fundamental theorem of algebra: if $A \in GL_n (\mathbb{C})$, then $z 1 + (1-z)A$ is invertible except for a finite number of values of $z$, and the complement of a finite set of points of the plane is path-connected; this follows, for example, from the transversality theorem.)
6. The Lefschetz number of the identity on $CP^{n-1}$ equals $n\neq 0$, thus the Lefschetz number of $f_A$ is not zero.
7. By the Lefschetz fixed point theorem, $f_A$ has a fixed point.

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/117582#117582
Answer by Ron Maimon (2012-12-30):

$K_n$ is non-planar for $n>4$: it would contradict the four-color theorem.

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/117588#117588
Answer by Benjamin Steinberg (2012-12-30):

Here is a Ramsey theory proof that every finite semigroup has an idempotent. Let $S$ be a finite semigroup with finite generating set $A$. Choose an infinite word $a_1a_2\cdots$ over $A$. Color the complete graph on $0,1,2,\dots$ by coloring the edge from $i$ to $j$, with $i<j$, by the image in $S$ of $a_{i+1}\cdots a_j$. By Ramsey's theorem there is a monochromatic clique $i<j<k$. This means the common image $s$ of
$$a_{i+1}\cdots a_j, \qquad a_{j+1}\cdots a_k, \qquad a_{i+1}\cdots a_k$$
satisfies $s^2 = s$, since the first two words concatenate to the third; that is, $s$ is an idempotent.

This proof, generalized to larger clique sizes, actually shows any infinite word contains arbitrarily long consecutive subwords mapping to the same idempotent of $S$, which is used in studying automata over infinite words.
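For what it's worth, the simple fact itself can also be machine-checked without Ramsey theory: in a finite semigroup, the powers of any single element already contain an idempotent. A minimal brute-force sketch in Python (the transformation semigroup of a 3-element set is an arbitrary example choice of mine, not part of the answer above):

```python
from itertools import product

def compose(f, g):
    """Composition (f after g) of self-maps of {0,...,len(g)-1}, encoded as tuples."""
    return tuple(f[g[x]] for x in range(len(g)))

def idempotent_power(a):
    """Return an idempotent among the powers a, a^2, a^3, ..."""
    powers = [a]
    while True:
        nxt = compose(powers[-1], a)
        if nxt in powers:      # the powers of a have started to cycle
            break
        powers.append(nxt)
    for p in powers:           # the cyclic subsemigroup <a> contains an idempotent
        if compose(p, p) == p:
            return p

# check every element of T_3, the 27 self-maps of {0, 1, 2}
for f in product(range(3), repeat=3):
    e = idempotent_power(f)
    assert compose(e, e) == e
print("every element of T_3 has an idempotent power")
```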
http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/119832#119832
Answer by Martin Brandenburg (2013-01-25):

$5/2 = 2 \frac{1}{2}$ since both are the groupoid cardinality of the following action: [image of the action, originally hosted at http://imageshack.us/a/img132/228/actionj.png]

Thinking about this, it is actually quite enlightening. For more information, see the wonderful paper From Finite Sets to Feynman Diagrams by John Baez and James Dolan (http://arxiv.org/abs/math/0004133).

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/120844#120844
Answer by J. H. S. (2013-02-05):

I think that the following proof of the fact that every subgroup of index $2$ of a given group is normal might count too. When I first came up with it (sometime during my sophomore year), I believed that I had just found the entrance to a royal road to mathematics.

Let $H\leq G$ be such that $[G:H]=2$. We'll prove that every right coset of $H$ is equal to a left coset of $H$.

Since $[G:H]=2$, $G$ is both the union of two disjoint right cosets of $H$ and the union of two disjoint left cosets of $H$. Let us suppose that $G=He \cup Hx = eH \cup yH$, where $x,y\in G\setminus H$ and $e$ denotes the identity element of $G$. According to standard lore regarding the symmetric difference of sets,

$He \cup Hx = He \triangle Hx \triangle (He \cap Hx) = He \triangle Hx \triangle \emptyset = H \triangle (Hx\triangle \emptyset) = H\triangle Hx$

and

$eH \cup yH = eH \triangle yH \triangle (eH \cap yH) = eH \triangle yH \triangle \emptyset = H \triangle (yH \triangle \emptyset) = H \triangle yH$.

Therefore, $H\triangle Hx = H\triangle yH$. Canceling $H$ on both sides of the latter equality (which is perfectly valid, given that $(2^G, \triangle)$ is a group), we conclude that $Hx=yH$. Done.

If you consider that the prior argument doesn't qualify as awfully sophisticated, there is still another fancy way to derive the result in question. As a consequence of P. Hall's famous marriage theorem, M. Hall proves in Theorem 5.1.7 of his Combinatorial Theory that if $H$ is a finite index subgroup of $G$, there exists a set of elements that are simultaneously representatives for the right cosets of $H$ and the left cosets of $H$ (once he's proven the said theorem, he adds: "Simultaneous right-and-left coset representatives exist for a subgroup in a variety of other circumstances. This problem has been investigated by Ore [1]." — Ore's paper: http://www.ams.org.pbidi.unam.mx:8080/journals/proc/1958-009-04/S0002-9939-1958-0100639-2/home.html). In the case $[G:H]=2$, this implies at once that every right coset of $H$ is equal to a left coset of $H$, and we are done...

Last but not least, $[G:H]=2 \Rightarrow H \trianglelefteq G$ in the case when $|G|<\infty$ can also be seen as a consequence of the well-known fact according to which any subgroup of a finite group whose index is equal to the smallest prime that divides the order of the group is of necessity a normal subgroup of the group. B. R.
Gelbaum showcases in one of his books an action-free proof of this fact. He attributes both the fact and the action-free proof to Ernst G. Straus. Does any of you know on what grounds he did so? I have a Xerox copy of the relevant page in the book here. This is exactly what Gelbaum writes therein:

> At some time in the early 1940s Ernst G. Straus, sitting in a group theory class, saw the proof of the ... result [i.e., $[G:H]=2 \Rightarrow H \trianglelefteq G$] ... and immediately conjectured (and proved that night): ... IF G:H [sic] IS THE SMALLEST PRIME DIVISOR P of #(G) THEN H IS A NORMAL SUBGROUP.

P.S. The Galois-theoretic proof given by Matthias Künzer (http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/78003#78003) is just fabulous!

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/122684#122684
Answer by Alexander Gruber (2013-02-23):

This is kind of an elementary example, but I always thought it was funny to prove that $S_3$ is isomorphic to a subgroup of $S_6$ using Cayley's theorem.

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/122787#122787
Answer by Brendan McKay (2013-02-24):

The sum of the degrees of the vertices of a graph is even.

Proof: The number $N$ of graphs with degrees $d_1,\ldots,d_n$ is the coefficient of $x_1^{d_1}\cdots x_n^{d_n}$ in the generating function $\prod_{j\lt k}(1+x_jx_k)$. Now apply Cauchy's theorem in $n$ complex dimensions to find that
$$N = \frac{1}{(2\pi i)^n} \oint\cdots\oint \frac{\prod_{j\lt k}(1+x_jx_k)}{x_1^{d_1+1}\cdots x_n^{d_n+1}}\, dx_1\cdots dx_n,$$
where each integral is a simple closed contour enclosing the origin once. Choosing the circles $x_j=e^{i\theta_j}$, we get
$$N = \frac{1}{(2\pi)^n} \int_{-\pi}^\pi\cdots\int_{-\pi}^\pi \frac{\prod_{j\lt k}(1+e^{i(\theta_j+\theta_k)})}{e^{i(d_1\theta_1+\cdots +d_n\theta_n)}}\, d\theta_1\cdots d\theta_n.$$
Alternatively, choosing the circles $x_j=e^{i(\theta_j+\pi)}$, we get
$$N = \frac{1}{(2\pi)^n} \int_{-\pi}^\pi\cdots\int_{-\pi}^\pi \frac{\prod_{j\lt k}(1+e^{i(\theta_j+\theta_k)})}{e^{i(d_1\theta_1+\cdots +d_n\theta_n+k\pi)}}\, d\theta_1\cdots d\theta_n,$$
where $k=d_1+\cdots+d_n$.
Since $e^{ik\pi}=-1$ when $k$ is an odd integer, adding these two integrals gives $2N=0$: no graph at all has a degree sequence with odd sum.

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/122820#122820
Answer by practical (2013-02-24):

There exist transcendental numbers because:

- $x\mapsto \frac{1}{[{\mathbb Q}(x):{\mathbb Q}]}{\rm Tr}_{{\mathbb Q}(x)/{\mathbb Q}}\,x$ is a well defined, non-zero, linear form from $\bar{\mathbb Q}$ to ${\mathbb Q}$.
- The kernel of a non-zero linear form from ${\mathbb R}$ to ${\mathbb Q}$ is not measurable.
- By Solovay, every subset of ${\mathbb R}$ can be assumed to be measurable.

Conclusion: ${\mathbb R}\neq \bar{\mathbb Q}$.

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/125812#125812
Answer by Dietrich Burde (2013-03-28):

One can also show with Fermat's last theorem that $\sqrt{2}$ is irrational (the answer of mt did $2^{1/n}$ for $n\ge 3$). Suppose that $\sqrt{2}$ is rational. Then there is a right-angled triangle with rational sides $(a,b,c)=(\sqrt{2},\sqrt{2},2)$ and area $1$. Hence $1$ would be a congruent number. This contradicts Fermat's last theorem with exponent $4$.

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/130318#130318
Answer by Wlodzimierz Holsztynski (2013-05-11):

Around the year 1970 a popular way to compute the cohomology groups of finite cyclic groups was by applying spectral sequences (which was quite an overkill).

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/130612#130612
Answer by Dominic Michaelis (2013-05-14):

As Helfgott uploaded a proof of the weak Goldbach conjecture, it is now possible (but I guess circular) to prove in this way that there are infinitely many primes. Suppose there are only finitely many primes, and let $p_{\max}$ be the highest prime number; then $3 p_{\max}+2$ would be an odd number which is not the sum of three primes, in contradiction to Goldbach's weak conjecture.

http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/130627#130627
Answer by Toink (2013-05-14):

Claim: $\sum\limits_{k=0}^n (-1)^k {n\choose k} = 0$ for all integers $n\geq 1$.

Proof: Take the $(n-1)$-dimensional simplex $\Delta_{n-1}$. We can compute its Euler characteristic by using simplicial homology. There are exactly ${n \choose k+1}$ $k$-dimensional sub-simplices of $\Delta_{n-1}$. Thus we get a simplicial chain complex of the form $\mathbb{Z}^{n\choose n} \to \mathbb{Z}^{n\choose n-1} \to \cdots \to \mathbb{Z}^{n\choose 2}\to\mathbb{Z}^{n\choose 1}$. So the Euler characteristic is $\chi(\Delta_{n-1}) = \sum\limits_{k=0}^{n-1} (-1)^k {n\choose k+1}=-\sum\limits_{k=1}^{n} (-1)^k {n\choose k}$.
On the other hand, $\Delta_{n-1}$ is contractible, and $\chi$ is a homotopy-equivalence invariant, so $\chi(\Delta_{n-1})=\chi(pt) =1$.
Putting these together we obtain: $0=\chi(\Delta_{n-1})-\chi(\Delta_{n-1})=1+\sum\limits_{k=1}^{n} (-1)^k {n\choose k}=\sum\limits_{k=0}^n (-1)^k {n\choose k}$.
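Entirely against the spirit of the thread, the identity itself is of course a one-line machine check; a quick numeric sanity test (my own addition, not part of any answer above):

```python
# verify sum_{k=0}^{n} (-1)^k C(n, k) = 0 for small n
from math import comb

for n in range(1, 12):
    assert sum((-1) ** k * comb(n, k) for k in range(n + 1)) == 0
print("identity holds for n = 1..11")
```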
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515854120254517, "perplexity": 733.5350121684709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00058-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.computer.org/csdl/trans/tc/1979/02/01675303-abs.html
Issue No. 02 - February (1979 vol. 28), ISSN: 0018-9340, pp. 142-147

H. Niemann, Friedrich-Alexander-Universitat, Institut fur Mathematische Maschinen und Datenverarbeitung

ABSTRACT: An iterative algorithm for nonlinear mapping of high-dimensional data is developed. The step size of the descent algorithm is chosen to assure convergence. Steepest descent and coordinate descent are treated. The algorithm is applied to artificial and real data to demonstrate its excellent convergence properties.

INDEX TERMS: unsupervised learning, cluster analysis, coordinate descent, dimensionality reduction, iterative algorithm, nonlinear mapping, steepest descent

CITATION: H. Niemann, J. Weiss, "A Fast-Converging Algorithm for Nonlinear Mapping of High-Dimensional Data to a Plane", IEEE Transactions on Computers, vol. 28, no. 2, pp. 142-147, February 1979, doi:10.1109/TC.1979.1675303
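The abstract gives no formulas, so the following is only a generic sketch of the kind of iteration it describes: steepest descent on a stress between pairwise distances in the high-dimensional space and in the plane. The quadratic stress function and the fixed step size are assumptions of mine, not the paper's actual algorithm.

```python
import numpy as np

def nonlinear_map(X, steps=500, lr=0.05, seed=0):
    """Steepest descent on sum_{i<j} (|y_i - y_j| - |x_i - x_j|)^2."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # target distances
    Y = rng.normal(size=(n, 2))                                 # plane coordinates
    for _ in range(steps):
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, 1.0)               # avoid division by zero
        w = (d - D) / d
        np.fill_diagonal(w, 0.0)               # no self-interaction
        Y -= lr * (w[:, :, None] * diff).sum(axis=1)
    return Y

# e.g. embed 50 random 10-dimensional points into the plane
Y = nonlinear_map(np.random.rand(50, 10))
```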
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8705353140830994, "perplexity": 4504.926254764894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719877.27/warc/CC-MAIN-20161020183839-00331-ip-10-171-6-4.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/79250/drawing-multiple-arrows-at-arbitrary-angles-in-a-common-direction-with-overlap
# Drawing multiple arrows at arbitrary angles in a common direction, with overlap

I am trying to draw a diagram with TikZ that has multiple arrows (here I have 4) between nodes, where the nodes are placed arbitrarily in the diagram. My best (that is, shortest and easiest to understand) solution has been to use the draw option `double`, but this creates unwanted overlap of the arrows: I cannot see the arrows below other arrows, and I would like to see all of them. My current code is:

```latex
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc}
\tikzset{state/.style={circle,draw=black, very thick,minimum size=8ex,fill=white}}
\begin{document}
\begin{tikzpicture}
\node (x) at (0,0) {};
\coordinate (1) at ($(x) + (125:2)$);
\coordinate (2) at ($(x) + (55:2)$);
\coordinate (3) at ($(x) + (350:2)$);
\coordinate (t1) at ($(x) + (190:2)$);
\foreach \i\j in {1/2,2/3,1/t1,2/t1,1/3,3/t1} {
  \draw[draw=black,double distance=15pt] (\i) to (\j);
  \draw[draw=black,double distance=5pt] (\i) to (\j);}
\node[state] (1a) at (1) {1};
\node[state] (2a) at (2) {2};
\node[state] (3a) at (3) {3};
\node[state] (t1a) at (t1) {3};
\draw[loosely dotted,ultra thick] ($(x)+(290:1.8)$) to [bend left=30] ($(x)+(250:1.8)$);
\end{tikzpicture}
\end{document}
```

Yielding the following result (image omitted): I want to see all the arrow lines, not just the ones drawn last. This happens because, with the `double` option, two lines are actually drawn, one thinner than the other, the inside one being colored white.

Looking at some similar questions here, the solutions were either case-specific or used `nodename.west` or similar directions along with a `\foreach` loop and some `\yshift` commands. However, since my arrows do not all go either horizontally or vertically, and I would like the normal distance between two adjacent arrows to be the same throughout, and since I am not a TikZ whiz, I could not adjust those solutions to my needs. Any help on this issue is much appreciated.

Comments:
- You sure you want the lines being parallel? I could see it looking real nice originating at one point but ending up at different points using the `to` syntax. (Peter Grill, Oct 27 '12)
- @PeterGrill Each node in the diagram is supposed to represent a group of vertices of a graph, and the edges are all possible edges between two groups of vertices. So I guess parallel isn't necessary (though I like how it looks), but I don't want all arrows coming out of/going into a single coordinate. However, I think that if they all start at the center of one of the nodes, it might work at creating the desired impression. (jlv, Oct 27 '12)

Answer: Furnishing further with Jake's PSTricks `ncbar` equivalent style (from "Is there a TikZ equivalent to the PSTricks \ncbar command?"), it's possible to draw 2n lines instead of n double lines.

```latex
\documentclass[tikz]{standalone}
\usetikzlibrary{calc}
\tikzset{
  ncbar angle/.initial=90,
  ncbar/.style={
    to path=(\tikztostart)
      -- ($(\tikztostart)!#1!\pgfkeysvalueof{/tikz/ncbar angle}:(\tikztotarget)$)
      -- ($(\tikztotarget)!($(\tikztostart)!#1!
\pgfkeysvalueof{/tikz/ncbar angle}:(\tikztotarget)$)!\pgfkeysvalueof{/tikz/ncbar angle}:(\tikztostart)$)
      -- (\tikztotarget)
  },
  ncbar/.default=0.5cm,
}
\begin{document}
\begin{tikzpicture}[
    mystate/.style={circle,draw=black, very thick,minimum size=8ex,fill=white}
  ]
\def\mylabellist{{1,2,3,"$t-1$"}}
\foreach \x[count=\y] in {125,55,350,190}{\coordinate (n\y) at (\x:2);}
\foreach \x in {1,...,3}{
  \foreach \y in {\x,...,4}{
    \foreach \zz in {90,-90}{
      \draw (n\x) to[ncbar=0.1cm,ncbar angle=\zz] (n\y);
      \draw (n\x) to[ncbar=0.3cm,ncbar angle=\zz] (n\y);
    }
  }
}
\foreach \y in {1,...,4}{\node[mystate] (n\y a) at (n\y)
  {\pgfmathparse{\mylabellist[\number\numexpr\y-1\relax]}\pgfmathresult};}
\end{tikzpicture}
\end{document}
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9247280359268188, "perplexity": 2398.7417838697147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.intechopen.com/books/wavelet-theory-and-its-applications/wavelets-and-lpg-pca-for-image-denoising
# Wavelets and LPG-PCA for Image Denoising

By Mourad Talbi and Med Salim Bouhlel. Submitted: October 14th 2017. Reviewed: January 26th 2018. Published: October 3rd 2018. DOI: 10.5772/intechopen.74453

## Abstract

In this chapter, a new image denoising approach is proposed. It combines two image denoising techniques: the first is based on a wavelet transform (WT), and the second is a two-stage image denoising by PCA (principal component analysis) with LPG (local pixel grouping). In the proposed approach, we first apply the first technique to the noisy image in order to obtain a first estimate of the clean image. Then, we estimate the noise-level from the noisy image; this estimate is obtained by applying a third technique, for noise estimation from noisy images. The third step of the proposed approach consists in using the first estimate of the clean image, the noisy image, and the noise-level estimate as inputs of the second image denoising system (LPG-PCA). A comparative study of the proposed technique and the two other denoising techniques (one based on WT, the other on LPG-PCA) is performed. This comparative study uses a number of noisy images, and the PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) results show that the proposed approach outperforms the two other denoising approaches.

### Keywords

- image denoising
- wavelet transform
- noise estimation
- LPG-PCA

## 1. Introduction

In the image acquisition process, noise is inevitably introduced, so denoising is a necessary step for improving image quality. As a primary low-level image processing task, noise suppression has been extensively studied, and numerous denoising approaches have been proposed, from the earlier frequency-domain denoising approaches and smoothing filters [1] to the lately developed wavelet- [2, 3, 4, 5, 6, 7, 8, 9, 10, 11], curvelet- [12] and ridgelet- [13] based approaches, sparse representation [14] and K-SVD approaches [15], shape-adaptive transforms [16], bilateral filtering [17, 18], nonlocal mean-based techniques [19, 20], and nonlocal collaborative filtering [21]. With the quick development of modern digital imaging devices and their increasingly broad applications in our daily life, there is a rising need for denoising techniques that deliver higher image quality.

The WT (wavelet transform) [22] has proved its effectiveness in noise cancelation [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. This transform decomposes the input signal into multiple scales which represent different time-frequency components of the original signal. At each scale, operations such as statistical modeling [4, 5, 6] and thresholding [2, 3] can be applied for canceling noise. Noise reduction is performed by transforming the processed wavelet coefficients back into the spatial domain. Late developments of WT-based denoising techniques include ridgelet- and curvelet-based techniques [12, 13] for line-structure conservation. Although the WT has proved effective in denoising, it uses a fixed wavelet basis (with translation and dilation) to represent the image. Natural images, however, contain a rich variety of local structural patterns that cannot be well represented by just one fixed wavelet basis. Consequently, WT-based techniques can generate many visual artifacts in the denoising output.
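The fixed-basis thresholding scheme just described fits in a few lines of code. A sketch using PyWavelets; the db4 wavelet, the three-level decomposition, and the universal threshold are illustrative choices of ours, not the chapter's configuration (which uses a dual-tree transform, as discussed later):

```python
import numpy as np
import pywt

def dwt_denoise(noisy, wavelet="db4", level=3):
    """Soft-threshold the detail subbands of a separable 2D DWT."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # robust noise estimate from the finest diagonal subband (MAD / 0.6745)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(noisy.size))   # universal threshold
    out = [coeffs[0]]                             # keep the approximation band
    for (cH, cV, cD) in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, t, mode="soft") for c in (cH, cV, cD)))
    return pywt.waverec2(out, wavelet)
```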
To overcome this problem of the WT, Muresan and Parks [23] proposed a spatially adaptive principal component analysis (PCA)-based denoising technique, which computes a locally fitted basis for transforming the image. Elad and Aharon [15, 16] proposed a K-SVD-based denoising approach using sparse and redundant representations over a trained, highly over-complete dictionary. Foi et al. [16] applied a shape-adaptive discrete cosine transform (DCT) to the neighborhood, which can attain a very sparse representation of the image and consequently lead to efficient denoising. All these approaches showed better denoising performance than classical WT-based denoising techniques.

The NLM (nonlocal means) schemes use a very different philosophy from the above approaches for noise cancelation. The NLM idea can be traced back to [24], where similar image pixels are averaged according to their intensity distance. Similar ideas were used in the bilateral filtering schemes [17, 18], where both spatial and intensity similarities are exploited for pixel averaging. The NLM denoising framework was well established in [19]: each pixel in the image is estimated as the weighted average of all the pixels, with the weights determined by the similarity between pixels. This scheme was improved in [20], where pair-wise hypothesis testing was used in the NLM estimation. Inspired by the success of NLM schemes, Dabov et al. [21] proposed a collaborative image denoising scheme using a sparse 3D transform and patch matching. They look for similar blocks in the image by block matching and group these blocks into a 3D cube; then a sparse 3D transform is applied to the cube, and noise is canceled by Wiener filtering in the transformed domain. This so-called BM3D approach attains remarkable denoising results, yet its implementation is a little complex.

Lei Zhang et al. [25] presented an efficient PCA-based denoising approach with local pixel grouping (LPG). PCA is a classical de-correlation technique in statistical signal processing, pervasively used in dimensionality reduction, pattern recognition, etc. [26]. The original dataset is transformed into the PCA domain, and only the most significant principal components are conserved; consequently, trivial information and noise can be eliminated. In [23], a PCA-based scheme was proposed for image denoising using a moving window to compute the local statistics, from which the local PCA transformation matrix is estimated. That technique applies PCA directly to the noisy image without data selection, so much residual noise and many visual artifacts appear in the denoised image. In the LPG-PCA-based technique, Lei Zhang et al. [25] model a pixel and its nearest neighbors as a vector variable. The training samples of this variable are chosen by grouping the pixels whose local spatial structures, within the local window, are similar to that of the underlying pixel. With such an LPG technique, the local statistics of the variables can be accurately calculated, so that the image edge structures can be well conserved after shrinkage in the PCA domain for noise suppression. The LPG-PCA scheme proposed in [25] has two stages: the first stage yields an initial image estimate by eliminating most of the noise, and the second stage further refines the output of the first stage. The two stages have the same procedures except for the noise-level parameter.
Since the noise is significantly reduced in the first stage, the LPG precision is much improved in the second stage, so that the final denoised image is visually much better. Compared with the WT, which uses a fixed basis function to decompose the noisy image, the LPG-PCA approach is a spatially adaptive image representation, so it can better characterize the local structures of the image. Compared with the BM3D and NLM approaches, the LPG-PCA-based technique proposed in [25] can use a relatively small local window to group the similar pixels for PCA training, yet it yields results competitive with the state-of-the-art BM3D algorithm.

In this chapter we propose a new image denoising approach which combines the dual-tree discrete wavelet transform (DT-DWT)-based denoising approach [12] and the two-stage image denoising technique by PCA with local pixel grouping (LPG) [25]. To evaluate the proposed technique, we compare it to these two techniques [12, 25]; this comparison is based on PSNR and SSIM computation. In the rest of this chapter, we first deal with PCA, then with the DT-DWT [12], and then with the noise-level estimation from the noisy image proposed in [27, 28]. We then present the two denoising techniques proposed in [12, 25], detail the proposed image denoising technique, and finally give results and an evaluation.

## 2. Principal component analysis (PCA)

Let

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix} \quad (1)$$

be a dataset whose $i$-th row

$$X_i = [x_{i1}\;\; x_{i2}\;\; \cdots\;\; x_{in}] \quad (2)$$

is the sample vector of the variable $x_i$. The mean value of $X_i$ is computed as

$$\mu_i = \frac{1}{n}\sum_{j=1}^{n} X_i(j), \quad (3)$$

and the sample vector is centralized as

$$\bar{X}_i = X_i - \mu_i = [\bar{x}_{i1}\;\; \bar{x}_{i2}\;\; \cdots\;\; \bar{x}_{in}] \quad (4)$$

with $\bar{x}_{ij} = x_{ij} - \mu_i$. Accordingly, the centralized matrix of $X$ is

$$\bar{X} = [\bar{X}_1^T\;\; \bar{X}_2^T\;\; \cdots\;\; \bar{X}_m^T]^T. \quad (5)$$

Finally, the covariance matrix of the centralized dataset is computed as

$$\Omega = \frac{1}{n}\bar{X}\bar{X}^T. \quad (6)$$

The aim of PCA is to find an orthonormal transformation matrix $P$ that de-correlates $\bar{X}$, i.e., $\bar{Y} = P\bar{X}$, so that the covariance matrix of $\bar{Y}$ is diagonal. Since the covariance matrix $\Omega$ is symmetric, it can be written as

$$\Omega = \Phi\Lambda\Phi^T, \quad (7)$$

where $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_m)$ is the diagonal eigenvalue matrix with $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m$, and $\Phi = [\phi_1\;\; \phi_2\;\; \cdots\;\; \phi_m]$ is the $m \times m$ orthonormal eigenvector matrix; $\lambda_1, \dots, \lambda_m$ and $\phi_1, \dots, \phi_m$ are, respectively, the eigenvalues and the eigenvectors of $\Omega$. By setting

$$P = \Phi^T, \quad (8)$$

$\bar{X}$ can be de-correlated: $\bar{Y} = P\bar{X}$ and $\Lambda = \frac{1}{n}\bar{Y}\bar{Y}^T$. An interesting property of PCA is that it fully de-correlates the original dataset $\bar{X}$. In general, the signal energy concentrates on a small subset of the PCA-transformed dataset, whereas the noise energy spreads evenly over the whole dataset. Consequently, noise and signal can be better distinguished in the PCA domain.
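A minimal numerical sketch of the decorrelation of Eqs. (4)-(8), using numpy on synthetic data (our own illustration, not part of the original chapter):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 1000
X = rng.normal(size=(m, n))
X[1] += 0.8 * X[0]                       # introduce correlation between variables

Xc = X - X.mean(axis=1, keepdims=True)   # Eq. (4): centralize each row
omega = Xc @ Xc.T / n                    # Eq. (6): covariance matrix
lam, phi = np.linalg.eigh(omega)         # Eq. (7); numpy sorts eigenvalues ascending
P = phi.T                                # Eq. (8)
Y = P @ Xc
cov_Y = Y @ Y.T / n                      # should be (numerically) diagonal
assert np.allclose(cov_Y, np.diag(np.diag(cov_Y)), atol=1e-8)
```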
## 3. LPG-PCA denoising algorithm

### 3.1. Modeling of spatially adaptive PCA denoising

In [25], as in the previous literature, the noise $\upsilon$ degrading the original image $I$ is assumed to be additive and white, with zero mean and standard deviation $\sigma$, so the noisy image $I^\upsilon$ is expressed as

$$I^\upsilon = I + \upsilon. \quad (9)$$

Noise $\upsilon$ and image $I$ are assumed to be uncorrelated. The purpose of image denoising is to estimate the clean image $I$ from $I^\upsilon$; the estimate is denoted by $\hat{I}$ and is expected to be as close as possible to the original image $I$. An image pixel is described by two quantities, its intensity and its spatial location, while the local image structure is represented as a set of neighboring pixels at different intensity levels. As most of the semantic information of an image is conveyed by its edge structures, edge conservation is highly required in denoising. In [25], the pixel and its nearest neighbors are therefore modeled as a vector variable, and denoising is performed on the vector instead of the single pixel.

According to Figure 1, for an underlying pixel to be denoised, Lei Zhang et al. [25] set a $K \times K$ window centered on it, and denote by $x = [x_1\; \cdots\; x_m]^T$, $m = K^2$, the vector containing all components within the window. As the observed image is the original image degraded by noise, the noisy vector of $x$ is denoted by [25]

$$x^\upsilon = x + \upsilon, \quad (10)$$

where $\upsilon = [\upsilon_1\; \cdots\; \upsilon_m]^T$, $x_k^\upsilon = x_k + \upsilon_k$ for $k = 1, \dots, m$, and $x^\upsilon = [x_1^\upsilon\; \cdots\; x_m^\upsilon]^T$. For estimating $x$ from the noisy vector $x^\upsilon$, both are viewed as vector variables, so that statistical techniques such as PCA can be used. For canceling the noise from the noisy vector $x^\upsilon$ by using PCA, a set of training samples of $x^\upsilon$ is needed, so that the covariance matrix of $x^\upsilon$, and therefore the PCA transformation matrix, can be computed. For this aim, Lei Zhang et al. [25] use an $L \times L$ ($L > K$) training block centered on $x^\upsilon$ to find the training samples, as illustrated in Figure 1. The simplest way is to take the pixels in each possible $K \times K$ block within the $L \times L$ training block as the samples of the noisy variable $x^\upsilon$; in this way, for each component $x_k^\upsilon$ of $x^\upsilon$, there are in total $(L-K+1)^2$ training samples. However, there can be blocks very different from the given central $K \times K$ block in the $L \times L$ training window, and taking all the $K \times K$ blocks as training samples of $x^\upsilon$ leads to inaccurate estimation of the covariance matrix of $x^\upsilon$, which subsequently leads to inaccurate estimation of the PCA transformation matrix and finally results in much residual noise. Consequently, selecting and grouping the training samples similar to the central $K \times K$ block is required before applying the PCA transform for denoising.

### 3.2. Local pixel grouping (LPG)

Grouping the training samples similar to the central $K \times K$ block in the $L \times L$ training window is a classification problem, and different grouping techniques, such as correlation-based matching, block matching, K-means clustering, etc., can be used based on different criteria. Block matching may be the simplest, but it is very efficient, and it is used in [25] for LPG. There are in total $(L-K+1)^2$ possible training blocks of $x^\upsilon$ in the $L \times L$ training window. We denote by $\bar{x}_0^\upsilon$ the column sample vector containing the pixels of the central $K \times K$ block, and by $\bar{x}_i^\upsilon$, $i = 1, 2, \dots, (L-K+1)^2 - 1$, the sample vectors corresponding to the other blocks. Let $\bar{x}_i$ and $\bar{x}_0$ be, respectively, the associated noiseless sample vectors of $\bar{x}_i^\upsilon$ and $\bar{x}_0^\upsilon$. It can be simply computed that

$$e_i = \frac{1}{m}\sum_{k=1}^{m}\left(\bar{x}_0^\upsilon(k) - \bar{x}_i^\upsilon(k)\right)^2 \approx \frac{1}{m}\sum_{k=1}^{m}\left(\bar{x}_0(k) - \bar{x}_i(k)\right)^2 + 2\sigma^2. \quad (11)$$

In Eq. (11), the fact that the noise $\upsilon$ is white and uncorrelated with the signal is used. With Eq. (11), if we have the condition

$$e_i < T + 2\sigma^2, \quad (12)$$

where $T$ designates a preset threshold, then we select $\bar{x}_i^\upsilon$ as a sample vector of $x^\upsilon$. Assume that $n$ sample vectors of $x^\upsilon$ are selected, including the central vector $\bar{x}_0^\upsilon$. For convenience of expression, these sample vectors are denoted $\bar{x}_0^\upsilon, \bar{x}_1^\upsilon, \dots, \bar{x}_{n-1}^\upsilon$, and their noiseless counterparts $\bar{x}_0, \bar{x}_1, \dots, \bar{x}_{n-1}$, accordingly. Then the training dataset for $x^\upsilon$ is

$$X^\upsilon = [\bar{x}_0^\upsilon\;\; \bar{x}_1^\upsilon\;\; \cdots\;\; \bar{x}_{n-1}^\upsilon]. \quad (13)$$

The noiseless counterpart of $X^\upsilon$ is designated $X = [\bar{x}_0\;\; \bar{x}_1\;\; \cdots\;\; \bar{x}_{n-1}]$.
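A sketch of this grouping step in numpy; the threshold $T$ and the window sizes are illustrative values of ours, and border handling is omitted, so the pixel $(i, j)$ is assumed to lie far enough from the image boundary:

```python
import numpy as np

def lpg_samples(noisy, i, j, K=3, L=15, sigma=20.0, T=200.0):
    """Collect the K x K blocks in the L x L window matching the central block."""
    hk, hl = K // 2, L // 2
    center = noisy[i - hk:i + hk + 1, j - hk:j + hk + 1].ravel()
    samples = []
    for a in range(i - hl + hk, i + hl - hk + 1):
        for b in range(j - hl + hk, j + hl - hk + 1):
            block = noisy[a - hk:a + hk + 1, b - hk:b + hk + 1].ravel()
            e = np.mean((center - block) ** 2)   # Eq. (11)
            if e < T + 2 * sigma ** 2:           # Eq. (12)
                samples.append(block)
    return np.stack(samples, axis=1)             # columns = sample vectors, Eq. (13)
```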
To ensure that there are enough samples for calculating the PCA transformation matrix, $n$ should not be too small. In practice, at least $c \cdot m$ training samples of $x^\upsilon$ are used in denoising, with $c = 8$ to $10$; that is, when $n < c \cdot m$, the best $c \cdot m$ matched samples are used in PCA training. The best-matched samples usually give a robust estimate of the local image statistics, and this operation makes the computation of the PCA transformation matrix more stable. The problem now is how to estimate the noiseless dataset $X$ from the noisy data $X^\upsilon$. Once $X$ is estimated, the central block, and therefore the central underlying pixel, can be extracted. This procedure is applied to each pixel, so that the entire image $I^\upsilon$ can be denoised. The LPG-PCA-based denoising is detailed in [25]; the denoising refinement in the second stage is detailed next.

### 3.3. Denoising refinement in the second stage

Most of the noise is suppressed by the denoising procedures described in [25]. However, much visually unpleasant residual noise can still be present in the denoised image. Figure 2 shows an example of image denoising, where (a) is the original image Cameraman, (b) its noisy version with $\sigma = 20$ ($\mathrm{PSNR} = 22.1\,\mathrm{dB}$), and (c) the denoised image ($\mathrm{PSNR} = 29.8\,\mathrm{dB}$) obtained with the LPG-PCA technique proposed in [25]. Despite the remarkable improvement in PSNR, one can still see much residual noise in the denoising output. There are mainly two reasons for this residual noise. First, because of the strong noise in the original dataset $X^\upsilon$, the covariance matrix $\Omega_{\bar{x}^\upsilon}$ is heavily degraded by noise, which biases the estimation of the PCA transformation matrix and therefore deteriorates the denoising performance. Second, the strong noise in the original dataset also leads to LPG errors, which again bias the estimation of the covariance matrix $\Omega_{\bar{x}^\upsilon}$ or $\Omega_{\bar{x}}$. Consequently, it is essential to further process the denoising output for better image denoising. As the noise is greatly reduced in the first round of LPG-PCA denoising, the LPG accuracy and the estimation of $\Omega_{\bar{x}^\upsilon}$ or $\Omega_{\bar{x}}$ can be much improved using the denoised image. Consequently, the LPG-PCA denoising procedure is run for a second round to enhance the denoising results; the visual quality is much improved after this second round of refinement.

As shown in Figure 3, in the second round of the LPG-PCA denoising technique [25], the noise-level must be updated. Denote by $\hat{I}$ the denoised version of the noisy image from the first stage; $\hat{I}$ can be expressed as

$$\hat{I} = I + \upsilon_s, \quad (14)$$

where $\upsilon_s$ is the residual noise in the denoised image. Its level, denoted by $\sigma_s = \sqrt{E[\upsilon_s^2]}$, is input to the second round of the LPG-PCA denoising algorithm. In [25], $\sigma_s$ is estimated from the difference between $I^\upsilon$ and $\hat{I}$. Let

$$\tilde{I} = I^\upsilon - \hat{I} = \upsilon - \upsilon_s. \quad (15)$$

We have $E[\tilde{I}^2] = E[\upsilon^2] + E[\upsilon_s^2] - 2E[\upsilon \upsilon_s] = \sigma^2 + \sigma_s^2 - 2E[\upsilon \upsilon_s]$. The residual noise $\upsilon_s$ can be seen as a smoothed version of the noise $\upsilon$, mainly containing its low-frequency component. Let $\tilde{\upsilon} = \upsilon - \upsilon_s$ be their difference, which mainly contains the high-frequency component of $\upsilon$. Then $E[\upsilon \upsilon_s] = E[\tilde{\upsilon} \upsilon_s] + E[\upsilon_s^2]$. Generally, $E[\tilde{\upsilon} \upsilon_s]$ is much smaller than $E[\upsilon_s^2]$, so we obtain the approximation $E[\upsilon \upsilon_s] \approx E[\upsilon_s^2] = \sigma_s^2$. Thus, from $E[\tilde{I}^2] = \sigma^2 + \sigma_s^2 - 2E[\upsilon \upsilon_s]$, we obtain

$$\sigma_s^2 \approx \sigma^2 - E[\tilde{I}^2]. \quad (16)$$
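In code, the update of Eqs. (15)-(16) is essentially one line; the sketch below (our own) also includes the scaling constant $C_s$ of Eq. (17), which is introduced next:

```python
import numpy as np

def residual_noise_level(noisy, denoised, sigma, cs=0.35):
    """Estimate the residual noise level after the first LPG-PCA stage."""
    i_tilde = noisy - denoised                 # Eq. (15)
    s2 = sigma ** 2 - np.mean(i_tilde ** 2)    # Eq. (16)
    return cs * np.sqrt(max(s2, 0.0))          # Eq. (17), clipped at zero
```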
In practice, $\upsilon_s$ includes not only the residual noise but also the estimation error of the noiseless image $I$. Consequently, in the implementation of Lei Zhang et al. [25],

$$\sigma_s = C_s\sqrt{\sigma^2 - E[\tilde{I}^2]}, \quad (17)$$

where $C_s$ is a constant satisfying $C_s < 1$. In [25], Lei Zhang et al. found experimentally that setting $C_s$ around $0.35$ leads to satisfying denoising results for most of the test images. Figure 2d shows the denoised image ($\mathrm{PSNR} = 30.1\,\mathrm{dB}$) after the second round of the LPG-PCA denoising technique [25]. Although the PSNR is not improved by much on this image, the visual quality is clearly much better, since the residual noise left by the first round of denoising is efficiently eliminated.

## 4. The proposed image denoising technique

As previously mentioned, the proposed image denoising technique combines two denoising approaches: a dual-tree discrete wavelet transform (DT-DWT)-based denoising method [12] and a two-stage image denoising by PCA with LPG [25]. The first step of the proposed technique is to apply the first denoising approach [12] to the noisy image in order to obtain a first estimate of the clean image (the cleaned image). Then, we estimate the level of the noise corrupting the clean image. The cleaned image, the noisy image, and the noise-level are then used to apply the second approach, the two-stage image denoising by PCA with LPG [25]. Figure 4 illustrates the block diagram of the proposed technique. According to this figure, the first step applies the DT-DWT-based denoising approach [12] to the noisy image $I_b$ in order to obtain a first estimate of the clean image, $I_d$, and then estimates the noise-level $\upsilon$ from $I_b$. The noisy image $I_b$, the cleaned image $I_d$, and the noise-level $\upsilon$ constitute the inputs of the second image denoising system proposed in [25, 27]. The output of this system, and of the overall proposed one, is the final denoised image $I_{d1}$. In the LPG-PCA denoising system proposed in [25, 27], Lei Zhang et al. use the clean image $I$ and the noise-level $\upsilon$ as inputs [27]. However, only the noisy image $I_b$ is available, and for this reason we use in the proposed technique the DT-DWT-based denoising approach [12] to obtain a cleaned image $I_d$, which is then used in place of the clean image $I$; this clean image is one important input of the denoising system proposed by Lei Zhang et al. [27]. In the following two sections, we consider the first image denoising approach, based on the DT-DWT [12], and the technique of noise-level estimation from the noisy image $I_b$ proposed in [28, 29].

## 5. The Hilbert transform

The Hilbert transform of a signal corresponds, in the Fourier plane, to a filter with complex gain $-i\,\operatorname{sign}(\gamma)$ [30]. This corresponds to the impulse response $\mathrm{vp}\,\frac{1}{\pi t}$, where $\mathrm{vp}$ denotes the principal value in the Cauchy sense [30]. The analytic signal is then constructed as

$$z(t) = x(t) + iHx(t) = x(t) + \frac{i}{\pi}\,\mathrm{vp}\!\int_{-\infty}^{+\infty}\frac{x(s)}{t-s}\,ds. \quad (18)$$

This analytic signal has only positive frequencies. The Hilbert transform of a real signal is also real. Instead of considering the Hilbert transform of the wavelet (which is defined through the associated filters), we can consider the Hilbert transform of the signal, and the analysis is performed with the initial wavelet: since the Hilbert transform is a linear (convolution) filter, analyzing $Hf$ with the wavelet $\psi$ is equivalent to analyzing $f$ with the wavelet $H\psi$ [30]. Therefore we have the following scheme: let $X_n$ be the signal; it is analyzed with a real wavelet using the Mallat algorithm, giving the wavelet coefficients $d_1(j,k)$. Then $HX_n$ is analyzed with the same wavelet, giving the coefficients $d_2(j,k)$. We then construct the complex coefficients $d_{\mathrm{complex}}(j,k) = d_1(j,k) + i\,d_2(j,k)$; in what follows, the magnitude of these coefficients is called the Hilbert magnitude. The drawbacks of this method are as follows: the support of the Hilbert transform of a compactly supported wavelet is infinite, and there is a computational disadvantage, namely the cost of two wavelet transforms plus the Hilbert transform. Theoretically speaking, the support drawback can be limited by using an approximation of the Hilbert transform, but this approximation cannot be optimized for all scales [30]. One solution to this problem has been proposed by Kingsbury: the dual tree [30].
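A quick numerical illustration (ours) of the claim that the analytic signal of Eq. (18) has no negative frequencies, using SciPy's FFT-based Hilbert transform as a discrete stand-in for the principal-value integral:

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 1024, endpoint=False)
x = np.cos(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 130 * t)
z = hilbert(x)                      # analytic signal x + i*Hx
Z = np.fft.fft(z)
neg = np.abs(Z[513:])               # strictly negative-frequency bins
print("max |Z| at negative frequencies:", neg.max())   # ~ 0 up to rounding
```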
## 6. Dual-tree complex wavelet transform

The dual-tree complex wavelet transform (DT-CWT) performs signal analysis with two different DWT trees, with filters selected in such a manner as to obtain, approximately, a signal decomposition with respect to an analytic wavelet [30]. Figure 5 shows a tree of the DT-CWT, using two different filter banks: $h_1$ and $g_1$ are the high-pass filters of the first and second trees, and $h_0$ and $g_0$ are the low-pass filters of the same two trees [30]. The first tree gives the coefficients of the real part, $d_r(j,k)$, and the second tree gives those of the imaginary part, $d_i(j,k)$. We then construct the complex coefficients $d_{\mathrm{complex}}(j,k) = d_r(j,k) + i\,d_i(j,k)$; the magnitude of these coefficients is called the dual-tree magnitude [30]. This Q-shift dual-tree complex wavelet transform (Figure 5) is in 1D. The synthesis of filters adapted to this structure has been carried out in several research works; in particular, Kingsbury [30] proposed filters named Q-shift. In [30], such filters are employed, and their use is equivalent to signal analysis by the wavelets illustrated in Figure 6. We can see in this figure that the wavelet corresponding to the imaginary-part tree is very close to the Hilbert transform of the wavelet corresponding to the real-part tree [30]. Finally, the use of this structure requires a prefiltering operation; that is, the filters used in the first step are not the same as those used in the following steps. The advantages of this method compared to the direct Hilbert transform (Section 5) are [30]:

- a lower computational cost (two DWTs),
- an approximation of the Hilbert transform that is optimized for each scale,
- the possibility of exact reconstruction is preserved.

The principal drawback of the DT-CWT is that the well-known wavelets of the DWT (Daubechies wavelets, splines, etc.) cannot be used, and therefore the number of vanishing moments cannot be chosen (all the Q-shift filters give wavelets with two vanishing moments).

### 6.1. 2D DT-CWT

To explain how the DT-CWT produces oriented wavelets, consider the 2D wavelet $\psi(x,y) = \psi(x)\psi(y)$ associated with the row-column implementation of the wavelet transform, where $\psi(x)$ is a complex (approximately analytic) wavelet given by [31]

$$\psi(x) = \psi_h(x) + i\psi_g(x). \quad (19)$$

Therefore, we obtain the following expression for $\psi(x,y)$:

$$\psi(x,y) = (\psi_h(x) + i\psi_g(x))(\psi_h(y) + i\psi_g(y)) = \psi_h(x)\psi_h(y) - \psi_g(x)\psi_g(y) + i\left[\psi_g(x)\psi_h(y) + \psi_h(x)\psi_g(y)\right]. \quad (20)$$

The idealized diagram in Figure 7 illustrates the support of the Fourier spectrum of this complex wavelet [31]. Since the (approximately analytic) 1D wavelet spectrum is supported on just one side of the frequency axis, the spectrum of the complex 2D wavelet $\psi(x,y)$ is supported in just one quadrant of the 2D frequency plane. That is why the complex 2D wavelet $\psi(x,y)$ is oriented.
If the real part of this complex wavelet is taken, then the sum of two separable wavelets is obtained:

$$\operatorname{Re}\left[\psi(x,y)\right] = \psi_h(x)\psi_h(y) - \psi_g(x)\psi_g(y). \quad (21)$$

Since the spectrum of a real function must be symmetric with respect to the origin, the spectrum of this real wavelet is supported in two quadrants of the 2D frequency plane (Figure 8). Unlike a real separable wavelet, the support of the spectrum of this real wavelet is free of the checkerboard artifact, and consequently this real wavelet (illustrated in the second panel of Figure 11) is oriented at $-45^\circ$. It is worth mentioning that this construction depends on $\psi(x) = \psi_h(x) + i\psi_g(x)$ being (approximately) analytic, or equivalently on $\psi_g(x)$ being approximately the Hilbert transform of $\psi_h(x)$, i.e., $\psi_g(x) \approx H\{\psi_h(x)\}$. Note that $\psi_h(x)\psi_h(y)$ is the HH sub-band of a separable 2D real wavelet transform implemented with the filters $\{h_0(n), h_1(n)\}$, and $\psi_g(x)\psi_g(y)$ is the HH sub-band obtained from the real separable wavelet transform implemented with the filters $\{g_0(n), g_1(n)\}$.

To obtain a real 2D wavelet oriented at $+45^\circ$, we now consider the complex 2D wavelet $\psi_2(x,y) = \psi(x)\bar{\psi}(y)$, where $\bar{\psi}(y)$ is the complex conjugate of $\psi(y)$ and, as previously, $\psi(x)$ is the approximately analytic wavelet $\psi_h(x) + i\psi_g(x)$. Therefore,

$$\psi_2(x,y) = (\psi_h(x) + i\psi_g(x))(\psi_h(y) - i\psi_g(y)) = \psi_h(x)\psi_h(y) + \psi_g(x)\psi_g(y) + i\left[\psi_g(x)\psi_h(y) - \psi_h(x)\psi_g(y)\right]. \quad (22)$$

The support in the 2D frequency plane of the spectrum of this complex wavelet is illustrated in Figure 9. As above, the spectrum of the complex wavelet $\psi_2(x,y)$ is supported in just one quadrant of the 2D frequency plane. If the real part of this complex wavelet is taken, then we have

$$\operatorname{Re}\left[\psi_2(x,y)\right] = \psi_h(x)\psi_h(y) + \psi_g(x)\psi_g(y), \quad (23)$$

whose spectrum is supported in two quadrants of the 2D frequency plane, as illustrated in Figure 10. Again, neither this real wavelet nor its spectrum has the checkerboard artifact, and this real 2D wavelet is oriented at $+45^\circ$, as illustrated in the fifth panel of Figure 11. To obtain four more oriented real 2D wavelets, one can repeat this procedure on the complex wavelets $\phi(x)\psi(y)$, $\psi(x)\phi(y)$, $\phi(x)\bar{\psi}(y)$, and $\psi(x)\bar{\phi}(y)$, where

$$\psi(x) = \psi_h(x) + i\psi_g(x), \quad (24)$$

$$\phi(x) = \phi_h(x) + i\phi_g(x). \quad (25)$$

By taking the real part of each of these wavelets, one obtains four real oriented 2D wavelets, in addition to the two already obtained in Eqs. (21) and (23). Precisely, we have six wavelets given by

$$\psi_i(x,y) = \frac{1}{\sqrt{2}}\left(\psi_{1,i}(x,y) - \psi_{2,i}(x,y)\right), \quad (26)$$

$$\psi_{i+3}(x,y) = \frac{1}{\sqrt{2}}\left(\psi_{1,i}(x,y) + \psi_{2,i}(x,y)\right), \quad (27)$$

where, for $i = 1, 2, 3$, the two separable 2D wavelet bases are

$$\psi_{1,1}(x,y) = \phi_h(x)\psi_h(y), \qquad \psi_{2,1}(x,y) = \phi_g(x)\psi_g(y), \quad (28)$$

$$\psi_{1,2}(x,y) = \psi_h(x)\phi_h(y), \qquad \psi_{2,2}(x,y) = \psi_g(x)\phi_g(y), \quad (29)$$

$$\psi_{1,3}(x,y) = \psi_h(x)\psi_h(y), \qquad \psi_{2,3}(x,y) = \psi_g(x)\psi_g(y). \quad (30)$$

The normalization factor $1/\sqrt{2}$ is used only so that the sum/difference operation constitutes an orthonormal operation. Figure 11 illustrates the six real oriented wavelets derived from a pair of typical wavelets satisfying $\psi_g(x) \approx H\{\psi_h(x)\}$. Compared to separable wavelets, these six non-separable wavelets succeed in isolating different orientations: each is aligned with a specific direction, no checkerboard effect appears, and they cover more distinct orientations than the separable wavelets obtained from the DWT. Moreover, since the sum/difference operation is orthonormal, the integer translates and dyadic dilations of this wavelet set form a frame [31].

## 7. The technique of noise-level estimation

In many image processing applications, the noise-level is an important parameter. For example, the performance of an image denoising technique can be much degraded by a poor noise-level estimate.
Most available denoising techniques simply assume that the noise-level is known, which largely prevents their practical use. Furthermore, even given the true noise-level, these denoising techniques still cannot achieve their best performance, especially for scenes with rich texture. Xinhao Liu et al. [28, 29] proposed a patch-based noise-level estimation technique and suggested that the noise-level parameter should be tuned according to the complexity of the scene. Their approach [28, 29] selects, from a single noisy image, low-rank patches without high-frequency components; the noise-level is then estimated from the selected patches using principal component analysis. Indeed, the exact noise-level does not always give the best performance for non-blind denoising. Experiments show that both the stability and the precision of this method are superior to state-of-the-art noise-level estimation for different noise-levels and scenes.

## 8. Evaluation criteria

In this section we evaluate three techniques: the proposed image denoising technique, the image denoising approach based on the DT-CWT [12], and the two-stage image denoising by principal component analysis with local pixel grouping [25]. The evaluation is based on the computation of PSNR and SSIM, which are detailed in [32].

## 9. Results and discussion

In this work, we applied the proposed image denoising technique, the image denoising technique based on the DT-CWT [12], and the two-stage image denoising by principal component analysis with local pixel grouping [25] to a number of digital images, such as "House," "Lena," and "Cameraman." These images are degraded by additive white noise with different values of the noise-level $\sigma$.
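Both metrics of Section 8 are standard; for reference, a sketch using scikit-image's implementations (assuming 8-bit grayscale images stored as arrays with values in [0, 255]):

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean, denoised):
    """Return (PSNR in dB, SSIM) between a clean image and its denoised estimate."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=255)
    ssim = structural_similarity(clean, denoised, data_range=255)
    return psnr, ssim
```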
The PSNR and SSIM values obtained by applying the three techniques to the noisy images are listed in Table 1.

| Image ($\sigma$) | DT-DWT [12] | LPG-PCA [25], first stage | LPG-PCA [25], second stage | Proposed |
|---|---|---|---|---|
| House (10) | 34.7138 (0.8778) | 35.4 (0.9003) | 35.6 (0.9012) | 36.1223 (0.9130) |
| House (20) | 31.6671 (0.8253) | 31.8 (0.8084) | 32.5 (0.8471) | 33.0828 (0.8677) |
| House (30) | 29.8494 (0.7877) | 29.3 (0.7225) | 30.4 (0.8185) | 31.2095 (0.8393) |
| House (40) | 28.5744 (0.8084) | 27.3 (0.6243) | 28.9 (0.7902) | 29.7344 (0.8084) |
| Lena (10) | 33.6767 (0.9170) | 33.6 (0.9218) | 33.7 (0.9243) | 34.0765 (0.9271) |
| Lena (20) | 30.0002 (0.8539) | 29.5 (0.8346) | 29.7 (0.8605) | 30.5415 (0.8765) |
| Lena (30) | 27.9859 (0.8016) | 27.1 (0.7441) | 27.6 (0.8066) | 28.3595 (0.8292) |
| Lena (40) | 26.6364 (0.7585) | 25.4 (0.6597) | 26.0 (0.7578) | 26.8566 (0.7882) |
| Cameraman (10) | 32.7481 (0.8989) | 33.9 (0.9261) | 34.1 (0.9356) | 33.6141 (0.9241) |
| Cameraman (20) | 28.9990 (0.8175) | 29.8 (0.8320) | 30.1 (0.8902) | 29.7184 (0.8575) |
| Cameraman (30) | 27.1022 (0.7641) | 27.3 (0.7395) | 27.8 (0.8558) | 27.8174 (0.8151) |
| Cameraman (40) | 25.7866 (0.7241) | 25.5 (0.6393) | 26.2 (0.8211) | 26.4954 (0.7826) |
| Monarch (10) | 32.9907 (0.9369) | 34.0 (0.9522) | 34.2 (0.9594) | 34.0698 (0.9553) |
| Monarch (20) | 29.1114 (0.8811) | 29.6 (0.8859) | 30.0 (0.9202) | 30.0384 (0.9145) |
| Monarch (30) | 27.0058 (0.8346) | 27.0 (0.8071) | 27.4 (0.8769) | 27.7209 (0.8735) |
| Monarch (40) | 25.5973 (0.7950) | 25.2 (0.7267) | 25.9 (0.8378) | 26.0832 (0.8293) |
| Peppers (10) | 33.4942 (0.9056) | 33.4 (0.8909) | 33.3 (0.8943) | 33.7904 (0.9189) |
| Peppers (20) | 29.8124 (0.8424) | 29.9 (0.8177) | 30.1 (0.8413) | 30.5252 (0.8743) |
| Peppers (30) | 27.7810 (0.7924) | 27.5 (0.7332) | 27.9 (0.7973) | 28.4765 (0.8356) |
| Peppers (40) | 26.4045 (0.7507) | 25.9 (0.6447) | 26.7 (0.7648) | 26.9883 (0.8013) |
| Paint (10) | 32.5488 (0.9165) | 33.5 (0.9280) | 33.6 (0.9311) | 33.3567 (0.9276) |
| Paint (20) | 28.5980 (0.8416) | 26.8 (0.7467) | 29.5 (0.8683) | 29.4699 (0.8648) |
| Paint (30) | 26.6067 (0.7817) | 26.8 (0.7467) | 27.2 (0.8088) | 27.2540 (0.8077) |
| Paint (40) | 25.2968 (0.7330) | 25.0 (0.6590) | 25.6 (0.7569) | 25.6389 (0.7560) |

### Table 1.
PSNR (dB) and SSIM (in parentheses) results of the denoised images for the different techniques.

These results (Table 1) show clearly that the proposed technique outperforms the denoising technique based on the DT-CWT proposed in [12] and the denoising approach based on LPG-PCA [25]. Figures 12-15 show four examples of image denoising using the proposed technique. These figures show that the noise corrupting the original images is sufficiently suppressed; moreover, the proposed technique yields denoised images with good perceptual quality. In each of these figures, image (c) is obtained after the first denoising stage of the proposed technique; some noise still remains in this image, whereas it is considerably reduced in image (d), obtained after the second denoising step. In the following subsection, we give the results obtained by applying the proposed technique, the LPG-PCA-based denoising technique [25, 27], and the DT-DWT-based one to a number of grayscale images. These results, in terms of SNR and MSE, are listed in Table 2.
| Image ($\sigma$) | DT-DWT [12] | LPG-PCA [25] | Proposed |
|---|---|---|---|
| House (10) | SNR = 78.00, MSE = 21.96 | SNR = 79.41, MSE = 15.88 | SNR = 79.44, MSE = 15.75 |
| House (20) | SNR = 74.95, MSE = 44.29 | SNR = 76.37, MSE = 31.97 | SNR = 76.38, MSE = 31.92 |
| House (30) | SNR = 73.14, MSE = 67.31 | SNR = 74.50, MSE = 49.21 | SNR = 74.50, MSE = 49.21 |
| House (40) | SNR = 71.86, MSE = 90.28 | SNR = 73.02, MSE = 69.16 | SNR = 73.02, MSE = 69.14 |
| Lena (10) | SNR = 74.67, MSE = 27.88 | SNR = 75.28, MSE = 24.17 | SNR = 75.28, MSE = 24.19 |
| Lena (20) | SNR = 70.99, MSE = 65.02 | SNR = 71.53, MSE = 57.40 | SNR = 71.55, MSE = 57.19 |
| Lena (30) | SNR = 68.97, MSE = 103.39 | SNR = 69.35, MSE = 94.87 | SNR = 69.37, MSE = 94.36 |
| Lena (40) | SNR = 67.62, MSE = 141.07 | SNR = 67.85, MSE = 134.09 | SNR = 67.87, MSE = 133.35 |
| Cameraman (10) | SNR = 75.33, MSE = 34.53 | SNR = 76.19, MSE = 28.29 | SNR = 76.23, MSE = 28.06 |
| Cameraman (20) | SNR = 71.58, MSE = 81.88 | SNR = 72.30, MSE = 69.38 | SNR = 72.33, MSE = 68.80 |
| Cameraman (30) | SNR = 69.68, MSE = 126.72 | SNR = 70.39, MSE = 107.48 | SNR = 70.45, MSE = 106.00 |
| Cameraman (40) | SNR = 68.36, MSE = 171.56 | SNR = 69.07, MSE = 145.72 | SNR = 69.14, MSE = 143.51 |
| Monarch (10) | SNR = 74.94, MSE = 32.65 | SNR = 76.02, MSE = 25.47 | SNR = 76.01, MSE = 25.55 |
| Monarch (20) | SNR = 71.06, MSE = 79.78 | SNR = 71.99, MSE = 64.45 | SNR = 71.98, MSE = 64.53 |
| Monarch (30) | SNR = 68.96, MSE = 129.56 | SNR = 69.67, MSE = 109.89 | SNR = 69.68, MSE = 109.62 |
| Monarch (40) | SNR = 67.55, MSE = 179.20 | SNR = 68.01, MSE = 161.05 | SNR = 68.03, MSE = 160.25 |
| Peppers (10) | SNR = 76.07, MSE = 29.08 | SNR = 76.65, MSE = 25.43 | SNR = 76.63, MSE = 25.56 |
| Peppers (20) | SNR = 72.39, MSE = 67.89 | SNR = 73.10, MSE = 57.61 | SNR = 73.12, MSE = 57.43 |
| Peppers (30) | SNR = 70.36, MSE = 108.38 | SNR = 71.05, MSE = 92.34 | SNR = 71.07, MSE = 92.02 |
| Peppers (40) | SNR = 68.98, MSE = 148.80 | SNR = 69.57, MSE = 130.09 | SNR = 69.58, MSE = 129.58 |

### Table 2.
SNR (dB) and MSE results of the denoised images for the different techniques.

These results show that the proposed technique outperforms the two other techniques (the LPG-PCA-based denoising technique [25, 27] and the DT-DWT-based one [12]): the proposed technique yields the highest SNR values and the lowest MSE values.

## 10. Conclusion

In this chapter, a new image denoising technique is proposed. It combines two denoising approaches: the first is a dual-tree discrete wavelet transform (DT-DWT)-based denoising technique, and the second is a two-stage image denoising by principal component analysis with local pixel grouping (LPG-PCA). The first step of the proposed technique applies the first approach to the noisy image to obtain a first estimate of the clean image. Then we estimate the level of the noise corrupting the original image, using a method of noise estimation from noisy images. The third step uses this first clean-image estimate, the noisy image, and the noise-level estimate as inputs of the second image denoising system (LPG-PCA-based image denoising), which yields the final estimate of the clean image. A comparative study is performed between the proposed image denoising technique and two other denoising approaches, the first based on the DT-DWT and the second on LPG-PCA.
A comparative study was performed between the proposed image denoising technique and two other denoising approaches, where the first is based on DT-DWT and the second on LPG-PCA. This study is based on PSNR and SSIM computations, and the obtained results show that the proposed technique outperforms the two other denoising approaches. We also computed the SNR (signal-to-noise ratio) and the MSE (mean square error), and those results likewise show that the proposed technique outperforms the other techniques.

## Acknowledgments

We would like to thank all the people who contributed in some way to this work, which was supported by the CRTEn (Center of Research and Technology of Energy) of Borj Cedria, Tunisia, and the Ministry of Higher Education and Scientific Research.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8727298974990845, "perplexity": 1136.8859655469514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911896.73/warc/CC-MAIN-20200710175432-20200710205432-00414.warc.gz"}
https://www.zbmath.org/?q=ci%3A0889.05032+py%3A1998
# zbMATH — the first resource for mathematics

On Hadamard property of a certain class of finite groups. (English) Zbl 0911.20008

A finite group $$G$$ of order $$8n$$ with a central involution $$e^*$$ is called a Hadamard group if $$G$$ contains a transversal $$D$$ with respect to $$\langle e^*\rangle$$ such that $$| D\cap Dr|=2n$$ for every element $$r$$ of $$G$$ outside $$\langle e^*\rangle$$. Such a transversal is called a Hadamard subset. The cyclic group of order 4 is considered as a Hadamard group. If $$r^2=e^*$$, then we have $$| D\cap Dr|=2n$$ for every transversal $$D$$ of $$G$$ with respect to $$\langle e^*\rangle$$. Let $$A(e^*)$$ be the set of elements $$r$$ of $$G$$ such that $$r^2=e^*$$, and $$a(e^*)=| A(e^*)|$$. Then, if $$a(e^*)$$ is large enough compared with $$| G|$$, we may expect $$G$$ to be Hadamard. In the present paper the authors assume that $$a(e^*)\geq 4n$$ and investigate the Hadamard property of $$G$$ (of order $$8n$$).

Let $$T(G)$$ be the sum of the degrees of the irreducible characters of $$G$$. Then $$a(e^*)\leq T(G)$$, by the Frobenius-Schur formula on the number of involutions. Together with $$a(e^*)\geq 4n$$ this means that $$2T(G)\geq| G|$$, and such groups were classified by the reviewer and K. G. Nekrasov [see Ya. G. Berkovich and E. M. Zhmud, Characters of finite groups, Part 1. Transl. Math. Monogr. 172, Am. Math. Soc., Providence (1998), Chapter 11].

The authors divide their groups into three classes: Class I consists of groups such that $$a(e^*)>4n$$, class II consists of groups $$G$$ such that $$a(e^*)=4n$$ and $$G$$ contains a nonreal element, and class III consists of groups $$G$$ such that $$a(e^*)=4n$$ and all elements of $$G$$ are real. For groups of classes I and II we have that $$2T(G)>| G|$$. Note that generalized quaternion groups are members of class I (it is known that these groups are Hadamard; semidihedral and dihedral groups are not Hadamard).

The main result of this paper is the following Proposition 3. Any 2-group of order $$8n$$ with $$a(e^*)\geq 4n$$ is Hadamard. Note that there exists a non-Hadamard group among groups of class II. Namely, class II contains the series of groups $$X(n)$$ presented by $$X(n)=\langle r,s\mid r^{2n}=s^4=1$$, $$s^{-1}rs=r^{-1}\rangle$$. Proposition 4. If $$X(n)$$ is Hadamard, then $$n$$ is a sum of two squares. In particular, $$X(3)$$ is not Hadamard.

The class of Hadamard groups is very large. Indeed, as Ito showed, for every 2-group $$P$$ there exists a Hadamard 2-group $$G$$ such that $$P$$ is isomorphic to a subgroup of $$G$$. All known groups of class I are Hadamard. Class I contains the series of groups $$G(n)=\langle r,s\mid r^{2n}=s^2=e^*$$, $$s^{-1}rs=r^{-1}\rangle$$. These groups were investigated in the following papers: A. Baliga and K. J. Horadam [Australas. J. Comb. 11, 123-134 (1995; Zbl 0838.05017)] and D. L. Flannery [J. Algebra 192, No. 2, 749-779 (1997; Zbl 0889.05032)]. There are many open questions on Hadamard groups (for example, abelian Hadamard groups are not classified).
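To make the definition concrete, here is a small brute-force check — my own illustration, not part of the review — that the quaternion group $$Q_8=\{\pm 1,\pm i,\pm j,\pm k\}$$ (the smallest generalized quaternion group, so $$n=1$$) is a Hadamard group: enumerate all 16 transversals $$D$$ of $$\langle e^*\rangle$$ with $$e^*=-1$$, and test whether $$|D\cap Dr|=2$$ for every $$r$$ outside $$\langle e^*\rangle$$.

```python
from itertools import product

# Q8 elements encoded as (sign, basis) with basis in '1ijk' and the usual
# quaternion rules: i*j = k, j*k = i, k*i = j, and i^2 = j^2 = k^2 = -1.
MUL = {('1','1'):(1,'1'), ('1','i'):(1,'i'), ('1','j'):(1,'j'), ('1','k'):(1,'k'),
       ('i','1'):(1,'i'), ('i','i'):(-1,'1'), ('i','j'):(1,'k'), ('i','k'):(-1,'j'),
       ('j','1'):(1,'j'), ('j','i'):(-1,'k'), ('j','j'):(-1,'1'), ('j','k'):(1,'i'),
       ('k','1'):(1,'k'), ('k','i'):(1,'j'), ('k','j'):(-1,'i'), ('k','k'):(-1,'1')}

def mul(a, b):
    (sa, ba), (sb, bb) = a, b
    s, c = MUL[(ba, bb)]
    return (sa * sb * s, c)

estar = (-1, '1')                                   # the central involution e* = -1
G = [(s, b) for s in (1, -1) for b in '1ijk']       # all 8 elements
outside = [g for g in G if g not in [(1, '1'), estar]]

found = []
for signs in product((1, -1), repeat=4):            # a transversal picks one sign
    D = [(s, b) for s, b in zip(signs, '1ijk')]     # for each of 1, i, j, k
    Dset = set(D)
    if all(sum(mul(d, r) in Dset for d in D) == 2 for r in outside):
        found.append(D)

print(len(found), "Hadamard subsets; e.g.", found[0] if found else None)
```

In $$Q_8$$ every element outside $$\langle e^*\rangle$$ squares to $$e^*$$, so — exactly as the remark in the review predicts for such $$r$$ — all 16 transversals pass the test.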
##### MSC:

20C15 Ordinary representations and characters
05B20 Combinatorial aspects of matrices (incidence, Hadamard, etc.)
20D60 Arithmetic and combinatorial problems involving abstract finite groups
05B10 Combinatorial aspects of difference sets (number-theoretic, group-theoretic, etc.)

##### References:

[1] Baliga, A.; Horadam, K. J., Cocyclic Hadamard matrices over Z_t × Z_2^2, Australas. J. Comb., 11, 123-134 (1995) · Zbl 0838.05017
[2] Berkovich, Y.; Zhmud', E., Characters of Finite Groups, Part 1, Translations of Mathematical Monographs (1997), Amer. Math. Soc.
[3] Cho, J. R.; Ito, N.; Kim, P. S.; Sim, H. S., Hadamard 2-groups with arbitrarily large derived length, Australas. J. Comb., 16, 83-86 (1997) · Zbl 0892.20015
[4] Feit, W., Characters of Finite Groups (1967), Benjamin, New York/Amsterdam · Zbl 0166.29002
[5] Flannery, D. L., Cocyclic Hadamard matrices and Hadamard groups are equivalent, J. Algebra, 192, 749-779 (1997) · Zbl 0889.05032
[6] Ito, N., Some results on Hadamard groups, 149-155 · Zbl 0864.05021
[7] Ito, N., Note on Hadamard groups and difference sets, Australas. J. Comb., 11, 135-138 (1995) · Zbl 0826.05010
[8] Ito, N., Remarks on Hadamard groups, Kyushu J. Math., 50, 83-91 (1996) · Zbl 0889.05033
[9] Ito, N., Remarks on Hadamard groups, II, Meijo U. Sci. Rep. Fac. Sci. Eng., 37, 1-7 (1997) · Zbl 0892.20016
[10] Ito, N., On Hadamard groups, III, Kyushu J. Math., 51, 1-11 (1997)
[11] Nekrasov, K. G.; Berkovich, Y. G., Finite groups with large sums of degrees of irreducible characters, Publ. Math. Debrecen, 33, 333-354 (1986) · Zbl 0649.20005
[12] Nekrasov, K. G., Non-nilpotent finite groups with large sums of degrees of irreducible complex characters, Rep. Kalinin Univ., 66-70 (1987) · Zbl 0744.20014
[13] Nekrasov, K. G., Finite groups with large sums of degrees of irreducible complex characters, II, 118-130 (1987) · Zbl 0744.20014
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9291046857833862, "perplexity": 1026.3224433357184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610196.46/warc/CC-MAIN-20210613161945-20210613191945-00111.warc.gz"}
https://labs.tib.eu/arxiv/?author=I%20.%20J%20.%20D.%20MacGregor
### Measurement of the neutron F2 structure function via spectator tagging with CLAS (arXiv:1110.2770)

May 14, 2012 — hep-ex, nucl-ex

We report on the first measurement of the F2 structure function of the neutron from semi-inclusive scattering of electrons from deuterium, with low-momentum protons detected in the backward hemisphere. Restricting the momentum of the spectator protons to < 100 MeV and their angles to < 100 degrees relative to the momentum transfer allows an interpretation of the process in terms of scattering from nearly on-shell neutrons. The F2n data collected cover the nucleon resonance and deep-inelastic regions over a wide range of Bjorken x for 0.65 < Q^2 < 4.52 GeV^2, with uncertainties from nuclear corrections estimated to be less than a few percent. These measurements provide the first determination of the neutron-to-proton structure function ratio F2n/F2p at 0.2 < x < 0.8 with little uncertainty due to nuclear effects.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9672979116439819, "perplexity": 1563.5857364712901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703557462.87/warc/CC-MAIN-20210124204052-20210124234052-00599.warc.gz"}
http://mathhelpforum.com/math-topics/47993-how-do-i-do-print.html
# How do I do this?

• September 7th 2008, 01:56 AM
juliak
How do I do this?
(The problem was posted as an attached image, which is not reproduced here.)
• September 7th 2008, 02:06 AM
Moo
Hello,

Do you know much about similar triangles?

Since BD and AE are parallel, angles CBD and CAE are equal and angles CDB and CEA are equal. Angle ACE obviously equals angle BCD. --> Triangles BCD and CAE are similar.

Thus $\frac{BC}{AC}=\frac{CD}{CE}=\frac{BD}{AE}$
• September 7th 2008, 02:27 AM
juliak
I understand all that stuff about which angles are equal, but um how come you're dividing them?
• September 7th 2008, 02:31 AM
Moo
Quote:

Originally Posted by juliak
I understand all that stuff about which angles are equal, but um how come you're dividing them?

I'm dividing the lengths of the sides! It's a property of similar triangles: Similarity (geometry - Wikipedia, the free encyclopedia), see the second paragraph "similar triangles" — there are equalities involving the lengths of the sides.
• September 7th 2008, 03:04 AM
juliak
Huh? :S >_< I'm very confused.

The wikipedia article does not explain why/how you divide which lengths of the triangle with each other, it just says that once you know everything congruent, it is possible to deduce proportionalities between corresponding sides of the two triangles, such as the following (and then lists out a bunch).

Why are you dividing the lengths? And how do you know which lengths to divide with each other?

And when you put the numbers in, is it like this?
6/(x+8)=6/(x+8)=x/(x+5)

If so, how would that work out? >_<
• September 7th 2008, 03:25 AM
Moo
Quote:

Originally Posted by juliak
The wikipedia article does not explain why/how you divide which lengths of the triangle with each other [...] Why are you dividing the lengths? And how do you know which lengths to divide with each other?

Okay. See this table of corresponding angles and sides:

$\begin{array}{|c|c|} \hline \text{Triangle BCD} & \text{Triangle CAE} \\ \hline \angle BCD & \angle ACE \\ \hline \angle CBD & \angle CAE \\ \hline \angle CDB & \angle CEA \\ \hline \hline \hline CB & CA \\ \hline CD & CE \\ \hline BD & AE \\ \hline \end{array}$

So it seems to be ok for the correspondence of the angles (you know which one equals which other). Now, see what sides corresponding angles will intercept. This is what is said in the wikipedia. After that, what wikipedia implicitly says is that the ratio of lengths of corresponding sides is a constant. This is why I wrote the equalities above:

$\frac{BC}{AC}=\frac{CD}{CE}=\frac{BD}{AE}$

Quote:

6/(x+8)=6/(x+8)=x/(x+5)

If so, how would that work out? >_<

Yes it is! Well, see $\frac{6}{x+8}=\frac{x}{x+5}$

This gives $6(x+5)=x(x+8)$

Develop and solve the quadratic ;)
• September 8th 2008, 04:42 PM
juliak
Ohhhkay
Thank you so much (Rofl)
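For completeness (an addition, not part of the original thread), carrying out that last step:

$6(x+5)=x(x+8)\ \Longrightarrow\ x^2+2x-30=0\ \Longrightarrow\ x=-1\pm\sqrt{31},$

and since $x$ is a length it must be positive, so $x=-1+\sqrt{31}\approx 4.57$.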
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8326460123062134, "perplexity": 1689.9743159439342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160958.15/warc/CC-MAIN-20160205193920-00287-ip-10-236-182-209.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/26162/what-can-be-said-about-pairs-of-matrices-p-q-that-satisfies-p-1t-circ-p/131405
# What can be said about pairs of matrices P,Q that satisfy $(P^{-1})^T \circ P = (Q^{-1})^T \circ Q$?

Let $P,Q$ be $n$ by $n$ invertible matrices. Suppose further that $P$ and $Q$ satisfy the following equation: $$(P^{-1})^T \circ P = (Q^{-1})^T \circ Q$$ where $\circ$ denotes the Hadamard matrix product, which is simply the entrywise product. Then what can be said about $P$ and $Q$? More precisely, I want to know if there are additional relations between $P$ and $Q$. For example, one can show that the condition $(P^{-1})^T \circ P = (Q^{-1})^T \circ Q$ implies $$tr(P^{-1}DPE) = tr(Q^{-1}DQE)$$ for all diagonal matrices $D$ and $E$.

References in the literature about matrices of the form $(P^{-1})^T \circ P$ would help too.

Thank you, Malik

• I guess $P^{-1}$ is the ordinary matrix inverse? Matrix algebras which are also closed under the Hadamard product are called association schemes. There is a monograph by Bannai-Ito on them (a special case is given by algebras generated by strongly regular graphs; there is also a monograph on them by Brouwer, ..., which contains chapters on association schemes). Perhaps you should also have a look at Terwilliger pairs, which might be relevant for your problem. – Roland Bacher May 27 '10 at 16:18
• Yes, $P^{-1}$ is the ordinary matrix inverse, sorry for the confusion. Thanks for the references, I'll take a look at them. – Malik Younsi May 27 '10 at 17:10
• Related question: mathoverflow.net/questions/63027/…. I'd start from the book suggested in the answer, Horn and Johnson's Topics in Matrix Analysis (not to be confused with Matrix Analysis by the same authors). – Federico Poloni May 22 '13 at 8:44
• Just terminology. The matrix $P^{-T}\circ P$ is the gain array matrix associated with $P$. It was studied by C. R. Johnson & H. Shapiro. – Denis Serre May 22 '13 at 11:44

You might already know this, but I thought it was interesting: The all-ones vector is always an eigenvector of $(P^{-1})^\mathrm{T}\circ P$ with eigenvalue $1$. To see this, note that the $i$th entry of $((P^{-1})^\mathrm{T}\circ P)\mathbf{1}=(P\circ (P^{-1})^\mathrm{T})\mathbf{1}$ is precisely the $(i,i)$th entry of $PP^{-1}=I$ by the definition of matrix multiplication.

UPDATE: In hindsight, this is a special case of Theorem DMHP from Dietrich Burde's link.

Actually, if $D, E$ are diagonal and $Q=EPD$, then $(P,Q)$ is such a pair. A natural question is whether every pair such that $P^{-T}\circ P=Q^{-T}\circ Q$ is of the form above. But even this is false, because if $P$ is triangular, then $P^{-T}\circ P=I_n$.

I take the occasion to mention an open question: define $\Phi(P):=P^{-T}\circ P$ (the gain array). What are the matrices $P$ such that $\Phi^{(k)}(P)\rightarrow I_n$ as $k\rightarrow+\infty$? According to Johnson & Shapiro, this is true at least for

• Strictly diagonally dominant matrices
• Symmetric positive definite matrices

On the contrary, $\Phi$ has fixed points, for instance the mean $\frac12(P+Q)$ of two permutation matrices. See Exercises 335, 336, 342, 343 of my blog about Exercises on matrix analysis.

Also the following can be said about $P$ and $Q$. Your relation implies that the two matrices $PDP^{-1}$ and $QDQ^{-1}$ have the same diagonal elements for any diagonal matrix $D$. For $n=2$ this is a necessary and sufficient condition for the equality $(P^{-1})^T\circ P=(Q^{-1})^T\circ Q$.

Here is a Reference: http://linear.ups.edu/jsmath/0200/fcla-jsmath-2.00li101.html
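A quick numerical sanity check of two facts above — the all-ones eigenvector and the invariance $\Phi(EPD)=\Phi(P)$ for diagonal $D,E$ (my snippet, assuming NumPy):

```python
import numpy as np

def gain_array(P):
    """Phi(P) = P^{-T} o P, with o the Hadamard (entrywise) product."""
    return np.linalg.inv(P).T * P

rng = np.random.default_rng(0)
P = rng.standard_normal((5, 5))
D = np.diag(rng.uniform(1, 2, 5))
E = np.diag(rng.uniform(1, 2, 5))

ones = np.ones(5)
print(np.allclose(gain_array(P) @ ones, ones))            # all-ones eigenvector, eigenvalue 1
print(np.allclose(gain_array(E @ P @ D), gain_array(P)))  # Phi(EPD) = Phi(P)
```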
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9577937722206116, "perplexity": 202.3754228059079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381803.98/warc/CC-MAIN-20210308021603-20210308051603-00616.warc.gz"}
http://math.stackexchange.com/questions/76241/pi-approximation
# Pi approximation

Let $d(a,b)=$ the largest $n$ such that $a$ and $b$ agree on all digits up to $n$. E.g. $d(\pi,3.14)=3$, $d(0.1234667,0.1234669)=7$. What is the asymptotics of $d(\pi/4,1-1/3+1/5-1/7+\cdots(\pm)1/m)$ as $m\rightarrow\infty$?

- So $d(a,b)=\lceil-\log_{10}|a-b|\rceil$? – J. M. Oct 27 '11 at 2:56
- This is a complicated way of asking about the rate of convergence of the Gregory series. – Ragib Zaman Oct 27 '11 at 3:00
- No, there can be a problem if there is a long sequence of 9's in the decimal representation of $\pi$; you can see it here, section 10: ics.org.ru/doc?pdf=440&dir=e – Nurdin Takenov Oct 27 '11 at 3:21

Using the standard estimates for an alternating series of decreasing terms [1], we know the error $$\frac{\pi}{4} - \sum_{k=1}^m \frac{(-1)^{k+1} }{2k-1} = \sum_{k=m+1}^{\infty} \frac{(-1)^{k+1} }{2k-1}$$ has magnitude $\sim \displaystyle \frac{1}{2m}.$ The number of agreed digits is thus asymptotic to the number of leading decimal zeros in the decimal expansion of $\displaystyle \frac{1}{2m},$ which is $\sim \log_{10} 2m.$
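A quick numerical illustration of this answer (my addition): for $m$ terms the number of agreed digits tracks $\log_{10} 2m$, up to the long-runs-of-9's caveat raised in the comments.

```python
from math import pi, log10

def gregory_partial(m):
    """Partial sum 1 - 1/3 + 1/5 - ... with m terms of the Gregory-Leibniz series."""
    return sum((-1) ** (k + 1) / (2 * k - 1) for k in range(1, m + 1))

for m in (10, 100, 1000, 10000):
    err = abs(pi / 4 - gregory_partial(m))
    print(f"m={m:6d}  agreed digits ~ {-log10(err):5.2f}   log10(2m) = {log10(2 * m):5.2f}")
```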
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9547407627105713, "perplexity": 430.3079225322849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701161946.96/warc/CC-MAIN-20160205193921-00274-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/pressure-volume-graph-and-internal-energy.869330/
# Pressure-volume graph and internal energy

1. Apr 28, 2016

### NihalRi

1. The problem statement, all variables and given/known data
There is a pressure-volume graph with the gas changes shown, forming a rectangle. The corners are labeled A to D, starting from the upper left corner, heading right, and returning to point A. The question is: at what point, or along which line, is the internal energy of the gas greatest? Like this [an example figure, not reproduced here], except the numbers are different.

2. Relevant equations
ΔU = Q − W

3. The attempt at a solution
The prior question asked to find the temperatures at each point, given that A was at 300 K. So I thought that the internal energy of the gas would be maximum when it's at low volume and high pressure, therefore at point A. But at B & C it is hotter, meaning it must have gained energy from an external source. So going to B it has expanded and gained energy. But by expanding it does work, losing internal energy, right? So how come its temperature got higher? Honestly these graphs just confuse me a lot; I'd really appreciate any help.

2. Apr 28, 2016

### andrevdh

From A to B the gas is expanding, but the pressure is kept constant. Assuming the amount of gas stays the same, how can this be achieved?

3. Apr 28, 2016

### NihalRi

The temperature increases. Which means the internal energy increased? Which is strange, because I thought that with expansion the internal energy decreased, since work is done by the gas :/

4. Apr 28, 2016

### Staff: Mentor

This problem statement fails to tell you whether the process is reversible or irreversible, or whether the process is carried out clockwise or counter-clockwise. The process from A to B can be carried out adiabatically and irreversibly at constant pressure, in which case the temperature at B would be lower than at A. So apparently they meant for the expansion to be reversible.

5. Apr 28, 2016

### NihalRi

My mistake, I didn't mention that the entire process is going in a clockwise direction. So that means that A to B is adiabatic? And since Q doesn't change, the internal energy decreases?

6. Apr 28, 2016

### Staff: Mentor

It doesn't mean that A to B is adiabatic. If the process is reversible, then, from the ideal gas law, AB has to be isothermal. And, from the ideal gas law, the temperature has to be decreasing along BC. And, from the ideal gas law, CD must be isothermal (but at the lower temperature). And, from the ideal gas law, the temperature has to be increasing along AD.

So, where do you think the highest internal energy is?

7. Apr 28, 2016

### NihalRi

If A–B is isothermal, the internal energy wouldn't change; thus I think that's where it would be highest. But if these changes were isothermal, wouldn't the graph be curved?

8. Apr 28, 2016

### Staff: Mentor

I'm sorry. I messed up. If the pressure is constant along AB, the ideal gas law tells us that the temperature would have to be increasing from A to B. If the volume were held constant along BC, the ideal gas law tells us that the temperature would have to be decreasing along BC. If the pressure were held constant along CD, the ideal gas law tells us that the temperature would have to be decreasing along CD. If the volume is held constant along DA, the ideal gas law tells us that the temperature would have to be increasing along DA.

9. Apr 28, 2016

### NihalRi

It's ok, I think I get it. So the greatest temperature, which is at B, corresponds with the highest internal energy.
First an amendment: ΔU = Q + W (U internal energy, Q heat, W work). Work is done by the gas, meaning W decreases, but there is an increase in temperature leading the internal energy to increase; for that to happen, Q needed to be added to the system. Phew — I think I might have overcomplicated the whole thing; this whole time I had overlooked the fact that Q had to be added :) That made it look like the equation contradicted itself :) Thank you

10. Apr 28, 2016

### Staff: Mentor

In the version of the 1st law that you wrote, W is the work done on the gas. If W is the work done by the gas, then ΔU = Q − W.

11. Apr 28, 2016

### andrevdh

The gas does positive work in expanding from A to B and thus its internal energy decreases, but since its temperature is increased it also gains energy from this process. The difference between the heat gained and the work done by the gas is the change in its internal energy.

12. Apr 28, 2016

### Staff: Mentor

The wording of this is very confusing. "The gas does positive work in expanding from A to B and thus its internal energy decreases". I think what you mean is that "The gas does positive work in expanding from A to B and thus its internal energy tends to decrease, but this is more than offset by the absorption of heat from the surroundings."

13. Apr 28, 2016

### NihalRi

Can the energy gained and the change in internal energy be equal to the work done by the gas? I think that's what happens in this case, because the gas can only do an amount of work (expand) equal to the amount of energy it is supplied. The energy added increases the temperature of the gas, forcing it to expand in order to maintain the same pressure.

14. Apr 29, 2016

### andrevdh

Could you maybe post a copy or scan of the original question?

15. Apr 29, 2016

### NihalRi

Part b [attachment not reproduced]

16. Apr 29, 2016

### andrevdh

The temperature of a gas is a direct indicator of its internal energy. Increasing its internal energy shows up as an increase in its temperature. So doing work on the gas changes the thermal motion of the gas, which registers as a change in its internal energy; the same goes for heat. The energy is thus stored in the thermal motion of the molecules and determines the potential of the gas to do work, say like a steam engine.

Last edited: Apr 29, 2016

17. Apr 30, 2016

### lychette

All you need to do is apply the gas law PV = const × T. From A to B the pressure is constant (isobaric), and therefore increasing the volume by a factor of 4 means that the temperature increases by a factor of 4, to 1200 K... your answer. In a similar way you have the correct temperatures at points C and D. None of the changes are isothermal or adiabatic!

Work is done BY the gas going from A to B. Change pressure to pascals and volume to m³, and the work done is the area under the line AB. No work is done from B to C or from D to A. From C to D, work is done ON the gas... the area under the line CD. The net work done BY the gas is the area of the loop ABCD.

The max internal energy is at the max temp... point B.
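To tie the thread together numerically — with made-up numbers, since the actual figure isn't reproduced (the thread only establishes that the volume quadruples and that T_A = 300 K; the pressure values below are assumptions for illustration):

```python
# Illustrative numbers only -- the thread's figure is not reproduced.
# Assume a rectangular cycle with P_A = P_B = 2e5 Pa, P_C = P_D = 1e5 Pa,
# V_A = V_D = 1e-3 m^3, V_B = V_C = 4e-3 m^3, and T_A = 300 K.
P = {'A': 2e5, 'B': 2e5, 'C': 1e5, 'D': 1e5}
V = {'A': 1e-3, 'B': 4e-3, 'C': 4e-3, 'D': 1e-3}

T_A = 300.0
nR = P['A'] * V['A'] / T_A                 # ideal gas law: PV = nRT
T = {X: P[X] * V[X] / nR for X in 'ABCD'}
print(T)                                   # B is hottest (1200 K) -> largest internal energy

W_by_gas = (P['A'] - P['C']) * (V['B'] - V['A'])  # net work = enclosed area of the loop
print(W_by_gas)                                   # 300 J with these assumed values
```

This reproduces lychette's conclusions: B is the hottest corner (maximum internal energy), and the net work done by the gas equals the enclosed area.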
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9063934683799744, "perplexity": 514.2810849392276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117874.26/warc/CC-MAIN-20170823055231-20170823075231-00132.warc.gz"}
https://collegephysicsanswers.com/openstax-solutions/what-direction-velocity-negative-charge-experiences-magnetic-force-shown-each
Question

What is the direction of the velocity of a negative charge that experiences the magnetic force shown in each of the three cases in Figure 22.51 (the figure is not reproduced here), assuming it moves perpendicular to $B$?

1. right
2. into the page
3. down
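Since the figure is not available here, the following generic check (my addition) shows how each case is worked out: pick a trial velocity, compute the Lorentz force $F = qv \times B$ with $q < 0$, and compare with the force drawn in the figure.

```python
import numpy as np

q = -1.0                       # negative charge (arbitrary units)
B = np.array([0.0, 0.0, 1.0])  # assumed geometry: B out of the page (z-hat)
v = np.array([1.0, 0.0, 0.0])  # trial velocity to the right (x-hat)
F = q * np.cross(v, B)         # Lorentz force F = q v x B
print(F)                       # -> [0, 1, 0]: force toward the top of the page
```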
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9761329293251038, "perplexity": 629.5644305330311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400223922.43/warc/CC-MAIN-20200925084428-20200925114428-00017.warc.gz"}
https://socratic.org/questions/what-is-the-equation-of-the-line-perpendicular-to-y-9-16x-that-passes-through-12
# What is the equation of the line perpendicular to y=-9/16x that passes through (-12,5)?

Apr 9, 2016

$y = \frac{16}{9} x + \frac{79}{3}$

#### Explanation:

The given line is $y = \frac{- 9}{16} x$.

Two lines are perpendicular if ${m}_{1} \times {m}_{2} = - 1$, where ${m}_{1}$ is the slope of the given line and ${m}_{2}$ is the slope of the required line. Then

${m}_{2} = - 1 \times \frac{1}{m} _ 1$

${m}_{2} = - 1 \times \frac{16}{- 9} = \frac{16}{9}$

The equation of the required line is

$y - {y}_{1} = {m}_{2} \left(x - {x}_{1}\right)$

$y - 5 = \frac{16}{9} \left(x - \left(- 12\right)\right)$

$y = \frac{16}{9} x + 12 \left(\frac{16}{9}\right) + 5$

$y = \frac{16}{9} x + \frac{79}{3}$
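A quick symbolic check of this result (an addition, assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
m2 = sp.Rational(16, 9)            # perpendicular slope: -1 / (-9/16)
line = m2 * (x + 12) + 5           # point-slope form through (-12, 5)
print(sp.expand(line))             # 16*x/9 + 79/3
print(line.subs(x, -12))           # 5  -> the line passes through the point
print(sp.Rational(-9, 16) * m2)    # -1 -> the slopes are indeed perpendicular
```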
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8363149166107178, "perplexity": 2910.084615811213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363336.93/warc/CC-MAIN-20211207045002-20211207075002-00419.warc.gz"}
http://www.inpwafer.com/2019/07/room-temperature-annealing-effects-on.html
## Wednesday, July 17, 2019

### Room-Temperature Annealing Effects on Radiation-Induced Defects in InP Crystals and Solar Cells

Remarkable defect annealing in both p-type and n-type InP following 1-MeV electron irradiation has been observed at room temperature, resulting in the recovery of InP solar cell properties. The room-temperature annealing characteristics of radiation-induced defects in InP were studied by measuring InP solar cell photovoltaic properties in conjunction with deep-level transient spectroscopy. The recovery of InP solar cell radiation damage is found to be due mainly to the room-temperature annihilation of radiation-induced recombination centers such as the H4 trap (Ev + 0.37 eV) in p-InP. Moreover, the room-temperature annealing rate of radiation-induced defects in InP was found to be proportional to the 2/3 power of the carrier concentration. Additionally, a model has been considered in which point defects diffuse to sinks through impurities so as to annihilate and bind with impurities, thus forming point defect-impurity complexes.

Source: IOPscience

For more information, please visit our website, or send us an email at [email protected] and [email protected]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8787465691566467, "perplexity": 3038.7461521721693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00031.warc.gz"}
https://www.bartleby.com/questions-and-answers/problems-9.-given-that-x-x2-and-1x-are-solutions-of-the-homogeneous-equation-corresponding-to-in-cac/8b27c66b-3b35-4e29-9b25-e3627ea6638c
Problems

In each of Problems 1 through 4, use the method of variation of parameters to determine the general solution of the given differential equation.

1. y''' + y' = tan t
2. y''' − y' = t
3. y''' − 2y'' − y' + 2y = e^(4t)
4. y''' − y'' + y' − y = e^(−t) sin t

In each of Problems 5 and 6, find the general solution of the given differential equation. Leave your answer in terms of one or more integrals.

5. y''' − y'' + y' − y = sec t

9. Given that x, x^2, and 1/x are solutions of the homogeneous equation corresponding to x^3 y''' + x^2 y'' − 2xy' + 2y = 2x^4, x > 0, determine a particular solution.

10. Find a formula involving integrals for a particular solution of the differential equation y''' − y'' + y' − y = g(t).

11. Find a formula involving integrals for a particular solution of the differential equation y^(4) − y = g(t).

# Question

How might I be able to answer Problem 4? This problem is from a Differential Equations textbook. The section is titled "The Method of Variation of Parameters."

Step 1

In the method of variation of parameters, the particular solution is built from the complementary (homogeneous) solution itself — no undetermined-coefficients guess is needed — so the first task is to find y_c.

Step 2

For the differential equation y''' − y'' + y' − y = e^(−t) sin t, the characteristic equation is m^3 − m^2 + m − 1 = 0, i.e. (m − 1)(m^2 + 1) = 0.

Step 3

For m = 1, i, −i, the complementary solution is y_c(t) = c1 e^t + c2 cos t + c3 sin t. [The remainder of the worked solution is truncated in the source.]
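For Problem 4, SymPy's `dsolve` can carry out variation of parameters directly — a sketch (my addition, not part of the original answer; the hint string is one of SymPy's documented ODE hints):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Problem 4: y''' - y'' + y' - y = e^(-t) sin t
ode = sp.Eq(y(t).diff(t, 3) - y(t).diff(t, 2) + y(t).diff(t) - y(t),
            sp.exp(-t) * sp.sin(t))

sol = sp.dsolve(ode, y(t), hint='nth_linear_constant_coeff_variation_of_parameters')
print(sol)  # y(t) = C1*e^t + C2*cos(t) + C3*sin(t) + (particular solution)
```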
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.827796220779419, "perplexity": 3209.63683825292}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594333.5/warc/CC-MAIN-20200119064802-20200119092802-00314.warc.gz"}
https://math.stackexchange.com/questions/1222268/elliptic-curve-group-and-multiplicative-inverse-of-an-element
# Elliptic Curve Group and Multiplicative Inverse of an element

Let $E$ be an elliptic curve over a field $F_q$, where $q=p^n$ and $p$ is prime. We know that the elliptic curve group $E(F_q)$ under addition is an abelian/commutative group of order $\#E(F_q)=q+1-t$ with $|t|\leq2\sqrt{q}$, and the structure of this group is either cyclic or almost cyclic.

Since the point addition and doubling formulas need multiplicative inverses to be computed, how can it be proven that all the elements in $E(F_q)$ have multiplicative inverses?

• It sounds like you are confusing two groups. The formulas for addition/doubling are about the group of the elliptic curve. The required inverses are in the multiplicative group of non-zero elements of $F_q$. Those inverses exist by virtue of $F_q$ being a field. – Jyrki Lahtonen Apr 6 '15 at 12:29
• But if I am not wrong, the point addition and doubling require the elements of the elliptic curve group and generate a point in that group. So it is all about the elements of the elliptic curve group. – user110219 Apr 6 '15 at 12:34
• Yes, the points you are adding are on the elliptic curve. But the formulas use the coordinates of those points. And the coordinates are elements of the finite field. – Jyrki Lahtonen Apr 6 '15 at 12:42
• Oh God, I mixed the point with coordinate. Thank you and now it is clear to me. – user110219 Apr 6 '15 at 13:02
• No problem. Glad to hear that the problem was resolved. – Jyrki Lahtonen Apr 7 '15 at 10:59
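To make the comments' resolution concrete — the inverses in the addition formulas are inverses of *coordinates* in $F_p$, not inverses of points — here is a toy affine-coordinates sketch over a small prime field (my illustration; the curve parameters are an arbitrary choice, the point at infinity and vertical-line cases are omitted for brevity, and `pow(d, -1, p)` needs Python ≥ 3.8):

```python
p, a, b = 97, 2, 3   # toy curve y^2 = x^3 + 2x + 3 over F_97 (arbitrary choice)

def ec_add(P1, P2):
    """Affine addition/doubling; every division is a *field* inverse in F_p."""
    (x1, y1), (x2, y2) = P1, P2
    if P1 != P2:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p        # inverse of x2 - x1 in F_p
    else:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p # tangent-line slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# find two affine points by brute force
pts = [(x, y) for x in range(p) for y in range(p)
       if (y * y - x ** 3 - a * x - b) % p == 0]
P1 = pts[0]
P2 = next(pt for pt in pts if pt[0] != P1[0])            # avoid the vertical-line case
print(P1, P2, ec_add(P1, P2), ec_add(P1, P1))
```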
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8212596774101257, "perplexity": 265.09343887727823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313501.0/warc/CC-MAIN-20190817222907-20190818004907-00486.warc.gz"}
http://mathoverflow.net/questions/59207/fixing-a-mistake-in-an-introduction-to-invariants-and-moduli/59255
# Fixing a mistake in “An introduction to invariants and moduli”

On page 13 of the book "An introduction to invariants and moduli" of Mukai http://catdir.loc.gov/catdir/samples/cam033/2002023422.pdf there is a mistake, at the end of the proof of Proposition 1.9. It seems to me that this proof cannot be fixed without using the notion of Noetherian rings and the Hilbert basis theorem. The question is: can this proof be fixed without using commutative algebra — i.e., by the elementary reasoning that Mukai is using there?

I reproduce here the proof from the book for completeness. $S$ is the ring of polynomials, $G$ a group, $S^G$ is the ring of invariants.

Proposition. If $S^G$ is generated by homogeneous polynomials $f_1,...,f_r$ of degrees $d_1,...,d_r$, then the Hilbert series of $S^G$ is the power series expansion at $t=0$ of a rational function $$P(t)=\frac{F(t)}{(1-t^{d_1})...(1-t^{d_r})}$$ for some $F(t)\in \mathbb Z[t]$.

Proof. We use induction on $r$, observing that when $r=1$, the ring $S^G$ is just $\mathbb C[f_1]$ with the Hilbert series $$P(t)=1+t^{d_1}+t^{2d_1}+...=\frac{1}{1-t^{d_1}}.$$ For $r>1$ consider the injective complex linear map $S^G\to S^G$ defined by $h\mapsto f_rh$. Denote the image by $R\subset S^G$ and consider the Hilbert series for the graded rings $R$ and $S^G/R$. Since $R$ and $S^G/R$ are generated by homogeneous elements, we have $$P_{S^G}(t)=P_{R}(t)+P_{S^G/R}(t).$$ On the other hand, $\dim(S^G\cap S_d)=\dim(R\cap S_{d+d_r})$, so that $P_R(t)=t^{d_r}P_{S^G}(t)$, and hence $$P_{S^G}(t)=\frac{P_{S^G/R}(t)}{1-t^{d_r}}.$$ But $S^G/R$ is isomorphic to the subring of $S$ generated by the polynomials $f_1,...,f_{r-1}$, and hence by the induction hypothesis $P_{S^G/R}(t)=F(t)/(1-t^{d_1})...(1-t^{d_{r-1}})$ for some $F(t)\in \mathbb Z[t]$...

Mistake: It is not true that $S^G/R$ is isomorphic to the subring of $S$ generated by the polynomials $f_1,...,f_{r-1}$. For example, consider $\mathbb C^2$ with the action $(x,y)\to (-x,-y)$. Then let $f_1=x^2$, $f_2=y^2$, $f_3=xy$.

Motivation of this question. Of course this proposition is a partial case of the Hilbert–Serre theorem, proven for example at the end of Atiyah-Macdonald. But the point of the introduction in the above book is that one does not use any result of commutative algebra.

- Attiyah-McDonalds is a very funny typo. – Rasmus Bentmann Mar 22 '11 at 22:11
- Thanks! That maid me laugh too! But I'll change it nevertheless :) – aglearner Mar 22 '11 at 22:36
- @aglearner: Maconald is less funny, but still a typo! – Hailong Dao Mar 23 '11 at 0:22
- I've heard many people complain about frequent errors in Mukai's book. – bavajee Mar 23 '11 at 1:15

This is not an answer to the question; I just decided to give for completeness a standard proof of the above statement that uses (a version of) the Hilbert–Serre theorem. In this proof we need to use the Hilbert basis theorem. In the above statement $S^G$ is clearly a finitely generated graded module over the ring of polynomials $\mathbb C[x_1,...,x_r]$, so it is sufficient to prove:

Theorem (Hilbert, Serre). Suppose that $S=\sum ^{\infty}_{j=0}S_j$ is a commutative graded ring with $S_0=\mathbb C$, finitely generated over $\mathbb C$ by homogeneous elements $x_1,...,x_r$ in positive degrees $d_1,...,d_r$. Suppose that $M=\sum_{j=0}^{\infty} M_j$ is a finitely generated graded $S$-module (i.e., we have $S_iS_j\subset S_{i+j}$ and $S_iM_j\subset M_{i+j}$).
Then the Hilbert series $P(M,t)$ is of the form $$\sum_{j=0}^{\infty}\dim(M_j)t^j=P(M,t)=\frac{F(t)}{\prod_{j=1}^r(1-t^{d_j})}, \;\; F(t)\in \mathbb Z[t].$$

Proof. We work by induction on $r$. If $r=0$ then $P(M,t)$ is a polynomial with integer coefficients, so suppose $r>0$. Denote by $M'$ and $M''$ the kernel and cokernel of multiplication by $x_r$; we have an exact sequence for each $j$ $$0\to M'_j\to M_j \to^{x_r}M_{j+d_r}\to M''_{j+d_r}\to 0.$$ Now $M'$ and $M''$ are finitely generated graded modules over $\mathbb C[x_1,...,x_{r-1}]$, and so by induction their Hilbert series have the given form. From the above exact sequence we have $$t^{d_r}P(M',t)-t^{d_r}P(M,t)+P(M,t)-P(M'',t)=0.$$ Thus $$P(M,t)=\frac{P(M'',t)-t^{d_r}P(M',t)}{1-t^{d_r}}$$ has the given form.

Where did we use the Hilbert basis theorem? We use it when we say that $M'$ is finitely generated.
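To see concretely that the *statement* of Mukai's proposition survives Gil's counterexample even though the proof step fails: for the invariants $S^G=\mathbb C[x^2,xy,y^2]$ of the action $(x,y)\mapsto(-x,-y)$, the degree-$d$ piece consists of all $d+1$ monomials when $d$ is even and is zero otherwise, so the Hilbert series is $(1+t^2)/(1-t^2)^2=(1-t^4)/(1-t^2)^3$ — of the promised form with $r=3$ generators of degree $2$ and $F(t)=1-t^4$. A quick check (my addition, assuming SymPy):

```python
import sympy as sp

t = sp.symbols('t')
N = 12

# dim of the degree-d invariants of (x, y) -> (-x, -y): all d+1 monomials
# x^i y^(d-i) are invariant when d is even, none when d is odd.
dims = [(d + 1) if d % 2 == 0 else 0 for d in range(N + 1)]

F = (1 - t**4) / (1 - t**2)**3
coeffs = sp.Poly(sp.series(F, t, 0, N + 1).removeO(), t).all_coeffs()[::-1]

print(dims)
print([int(c) for c in coeffs])   # the two lists agree
```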
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9784926772117615, "perplexity": 110.7083403532988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827079.61/warc/CC-MAIN-20160723071027-00020-ip-10-185-27-174.ec2.internal.warc.gz"}
http://bicep.rc.fas.harvard.edu/CMB-S4/analysis_logbook/20190509_b3_obseff/
## Observing efficiency reality check

### Time factors only

Recently Reijo posted some scan pattern simulations Pole wide, Chile narrow, and Pole narrow. In 20190501_obseff I noted that the sums of detector-seconds in the Pole wide and Chile narrow patterns are 0.88 and 0.72 of the nominal calendar-year wall-clock time. This is due to some masking of turn-arounds, re-point time between scansets, and, in the case of Chile, non-availability of target fields given sun/moon avoidance at certain times of the year.

Let's take a reality check on what efficiency factors look like in the real world for useable scan time per calendar year, plus data-cut and non-uniform weighting penalties. I take as an example BICEP3 in 2017, which achieved historically good observing efficiency and focal plane yield - see this B3 SPIE paper for some details. It would be interesting to compare to ACTPol, who have shown some impressive observing efficiency in 2015 - see fig 4 here.

During the BICEP/Keck map accumulation process we keep track of the number of pair-seconds binned into each map pixel. Taking the sum over all pixels we get $$7.37 \times 10^9$$ pair-seconds. The nominal number of light pairs in the focal plane is 1200, so the number above should be compared to $$1200 \times 365 \times 86400 = 3.4 \times 10^{10}$$, for a ratio of $$0.195$$. This includes inefficiencies from scanning and data cuts.

Each scanset is nominally 50 minutes. From the start of the first scan to the end of the last is 43.4 minutes. After masking out the turnarounds we are left with 33.8 minutes - let's define $$f_\mathrm{scan}=33.8/50=0.676$$. In a 3 day observing run there are 73 scansets - $$(73 \times 50)/(3\times 24 \times 60)= 0.845$$ - however, since not all runs are complete and there may be periods when the telescope is broken, we won't use this number. Instead let's take the total time spent in scansets used to build the final map versus the calendar year - $$f_\mathrm{year} = (5021 \times 50)/(365\times 24 \times 60) = 0.478$$. So we expect $$f_\mathrm{year} f_\mathrm{scan} = 0.478 \times 0.676 = 0.323$$ as the fraction of the calendar year getting coadded into the map.

Although the nominal number of light detectors is 2400, only 2012 of these have both halves of the pair marked as nominally functional. Of these a fraction 0.72 of scanset/pair data passes data cuts as shown in this cut plot. So the fraction of nominal which passes cuts is $$f_\mathrm{pass} = (2012\times 0.72) / 2400 = 0.604$$. So we expect $$f_\mathrm{year} f_\mathrm{scan} f_\mathrm{pass} = 0.478 \times 0.676 \times 0.604 = 0.195$$ as the fraction of nominal detector time over the calendar year getting coadded into the maps - in close agreement with the integration time map. (The first used scanset was on 2017/03/17 and the last on 2017/11/06, which is a 234 day span. For interest we can compare the number of scansets between the first and last days of observing versus the expected number for continuous 3 day runs - $$5021/((234/3)\times 72) = 0.894$$.)

In the map accumulation process we apply weights to account for the fact that some pairs are noisier than others. We accumulate a weighted integration time map using these, which has a total of $$5.29 \times 10^9$$ effective pair-seconds - a further loss of a factor $$f_\mathrm{weights} = 5.29 / 7.37 = 0.718$$, for an overall efficiency versus nominal wall-clock time of $$f_\mathrm{year} f_\mathrm{scan} f_\mathrm{pass} f_\mathrm{weights} = 0.140$$.
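The bookkeeping above is easy to re-run (my transcription of the numbers quoted in this section):

```python
f_scan    = 33.8 / 50                     # usable scan time within a 50-minute scanset
f_year    = 5021 * 50 / (365 * 24 * 60)   # scanset minutes over the calendar year
f_pass    = 2012 * 0.72 / 2400            # nominally functional pairs passing cuts
f_weights = 5.29 / 7.37                   # penalty from non-uniform pair weighting

print(round(f_scan, 3), round(f_year, 3), round(f_pass, 3))   # 0.676 0.478 0.604
print(round(f_year * f_scan * f_pass, 3))                     # ~0.195
print(round(f_year * f_scan * f_pass * f_weights, 3))         # ~0.140
```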
Hopefully CMB-S4 can do a bit better on some of these factors - but we should not assume it is going to do a whole lot better.

### NET obs versus model calc

Reijo also included in those calcs nominal variance maps using the absolute 95 GHz NETs from 20190220_S4_NET_forecasts_III along with the variation versus observing elevation given there. All detectors are assumed to work and to deliver the same idealized NET all the time.

The BK pipeline produces sign-flip noise realizations. Taking the variance over these we can get $$Q$$ and $$U$$ variance maps. The observing time has been split between these. Noting that: $$Q_\mathrm{var}=\frac{\mathrm{NET}^2}{t_Q} \, \mathrm{and} \, U_\mathrm{var}=\frac{\mathrm{NET}^2}{t_U}$$ we can write $$t_\mathrm{obs} = t_Q + t_U = \frac{\mathrm{NET}^2}{Q_\mathrm{var}} + \frac{\mathrm{NET}^2}{U_\mathrm{var}}$$ and so $$\mathrm{NET}^2 = t_\mathrm{obs}\,\frac{Q_\mathrm{var}\,U_\mathrm{var}}{Q_\mathrm{var}+U_\mathrm{var}}$$

Since the observation time map is in pair-seconds, the NET we get from this is the pair-difference NET. We define pair difference as $$(A-B)/2$$, so we need to multiply by $$\sqrt{2}$$ to go from pair-diff to single-detector NET. Below is the resulting map of apparent NET - we see the increase at lower elevation due to increased atmospheric opacity. Histogramming this we get the plot below. We can compare this to the 262.8 $$\mu$$K$$\sqrt{\mathrm{s}}$$ which comes from the model calculation for elevation = 56 deg at Pole. Since that is actually higher than the lower half of the histogram here, we might claim that we are justified in using the 0.195 number above rather than the 0.140.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.9042220115661621, "perplexity": 1178.926725600253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401641638.83/warc/CC-MAIN-20200929091913-20200929121913-00175.warc.gz"}
https://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=1131
Time Limit : sec, Memory Limit : KB

# Problem C: Unit Fraction Partition

A fraction whose numerator is 1 and whose denominator is a positive integer is called a unit fraction. A representation of a positive rational number p/q as the sum of finitely many unit fractions is called a partition of p/q into unit fractions. For example, 1/2 + 1/6 is a partition of 2/3 into unit fractions. The difference in the order of addition is disregarded. For example, we do not distinguish 1/6 + 1/2 from 1/2 + 1/6.

For given four positive integers p, q, a, and n, count the number of partitions of p/q into unit fractions satisfying the following two conditions.

• The partition is the sum of at most n many unit fractions.
• The product of the denominators of the unit fractions in the partition is less than or equal to a.

For example, if (p,q,a,n) = (2,3,120,3), you should report 4, since 1/3 + 1/3, 1/2 + 1/6, 1/3 + 1/6 + 1/6, and 1/4 + 1/4 + 1/6 enumerate all of the valid partitions.

## Input

The input is a sequence of at most 1000 data sets followed by a terminator. A data set is a line containing four positive integers p, q, a, and n satisfying p,q <= 800, a <= 12000 and n <= 7. The integers are separated by a space. The terminator is composed of just one line which contains four zeros separated by a space. It is not a part of the input data but a mark for the end of the input.

## Output

The output should be composed of lines each of which contains a single integer. No other characters should appear in the output. The output integer corresponding to a data set p, q, a, n should be the number of all partitions of p/q into at most n many unit fractions such that the product of the denominators of the unit fractions is less than or equal to a.

## Sample Input

2 3 120 3
2 3 300 3
2 3 299 3
2 3 12 3
2 3 12000 7
54 795 12000 7
2 3 300 1
2 1 200 5
2 4 54 2
0 0 0 0

## Output for the Sample Input

4
7
6
2
42
1
0
9
3
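One straightforward solution (my sketch, not an official one) is a depth-first search over non-decreasing denominators, passing along the remaining fraction, the remaining product budget, and the remaining term count; the bounds (p, q ≤ 800, a ≤ 12000, n ≤ 7) keep the search small.

```python
import sys

def count(p, q, a, n, k):
    """Number of ways to write p/q as a sum of at most n unit fractions 1/k'
    with k' >= k (non-decreasing denominators, so each partition is counted
    exactly once) and with the product of denominators at most a."""
    if p == 0:
        return 1
    total = 0
    k = max(k, -(-q // p))            # smallest k with 1/k <= p/q
    while k <= a and p * k <= n * q:  # n terms of size <= 1/k must reach p/q
        # subtract 1/k: p/q - 1/k = (p*k - q)/(q*k); product budget becomes a // k
        total += count(p * k - q, q * k, a // k, n - 1, k)
        k += 1
    return total

for line in sys.stdin:
    p, q, a, n = map(int, line.split())
    if p == 0 and q == 0 and a == 0 and n == 0:
        break
    print(count(p, q, a, n, 1))
```

Requiring denominators to be non-decreasing is what makes each partition counted exactly once, matching the problem's rule that order of addition is disregarded.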
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9174016714096069, "perplexity": 291.6406763308322}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991693.14/warc/CC-MAIN-20210512004850-20210512034850-00290.warc.gz"}
https://www.physicsforums.com/threads/linearized-einstein-field-equations.639305/
# Linearized Einstein Field Equations

1. PLuz

Hi everyone,

Say that one can separate the metric of a spacetime into a background metric and a small perturbation such that $g_{\alpha \beta}=g'_{\alpha \beta}+h_{\alpha \beta}$, where $g'_{\alpha \beta}$ is the background metric and $h_{\alpha \beta}$ the perturbation. Computing the Christoffel symbols, one would get, to first order in the perturbation: $$\Gamma^\alpha_{\beta \gamma}=\Gamma'^\alpha_{\beta \gamma}+\frac{1}{2}(h^{\alpha}_{\beta,\gamma}+h^{ \alpha }_{\gamma,\beta}-h_{\beta \gamma}\hspace{.2mm}^{,\alpha}),$$ right? Then why, in the reference http://relativity.livingreviews.org/Articles/lrr-2011-7/fulltext.html, in the text right after Eq. (19.23), is $C^\alpha_{\beta \gamma}=\frac{1}{2}(h^{\alpha}_{\beta;\gamma}+h^{ \alpha }_{\gamma;\beta}-h_{\beta \gamma}\hspace{.2mm}^{;\alpha})$ written with covariant derivatives?

Thank you

Last edited:

2.

A difference between two connections is a tensor, which can be checked by explicitly writing down the transformation of this difference. Hence you'll need covariant derivatives, not partial derivatives. Of course, these covariant derivatives should follow from your definition of the connection and your C. So that is something which you should do first. Second, you should be very careful with lowering and raising indices underneath partial derivatives.

3. PLuz

Yes, you're absolutely right, on both counts. I didn't take care with the partial derivative when I raised my indices, and indeed I was being naive in the definition of C. Thank you very much, you were a life (brain) saver!
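For the record — my summary, not part of the thread — the point can be made precise as follows. Since $C^\alpha_{\beta\gamma}=\Gamma^\alpha_{\beta\gamma}-\Gamma'^\alpha_{\beta\gamma}$ is a tensor, one may evaluate it at any given point in Riemann normal coordinates of the background metric, where $\Gamma'=0$ and background covariant derivatives reduce to partial derivatives; the naive computation there gives, to first order in $h$,

$$C^\alpha_{\beta\gamma}=\frac{1}{2}\,g'^{\alpha\delta}\left(h_{\delta\beta;\gamma}+h_{\delta\gamma;\beta}-h_{\beta\gamma;\delta}\right),$$

with the semicolons denoting covariant derivatives with respect to $g'$; since both sides are tensors, the covariant expression then holds in every chart. Note that the index on $h$ is raised with the background metric *outside* the derivative — exactly the subtlety about raising indices under partial derivatives mentioned in reply 2.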
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8336493968963623, "perplexity": 1362.8898558519752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00115.warc.gz"}
http://mathhelpforum.com/calculus/145368-differentiable-function.html
Math Help - Differentiable function

1. Differentiable function

Suppose that f is a differentiable function with derivative f'(x) = (x-1)(x+2)(x+3). Determine the values of x for which the function f is increasing and the values of x for which the function is decreasing.

2. This should just be a simple case of determining the sign of f'(x) around the roots 1, -2, and -3. I imagine there's an easier or more rigorous approach, but just test a value in each of the intervals $(-\infty, -3)$, $(-3,-2)$, $(-2,1)$, $(1,\infty)$.
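A quick numerical run of the suggested interval test (my own sketch, not from the thread):

```python
def fprime(x):
    return (x - 1) * (x + 2) * (x + 3)

# one test point inside each interval determined by the roots -3, -2, 1
for x in (-4, -2.5, 0, 2):
    trend = "increasing" if fprime(x) > 0 else "decreasing"
    print(f"f'({x}) = {fprime(x):+.3f}  ->  f is {trend} there")
# conclusion: f is decreasing on (-inf, -3) and (-2, 1),
#             increasing on (-3, -2) and (1, inf)
```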
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.890873908996582, "perplexity": 221.26637189520383}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008919.73/warc/CC-MAIN-20141125155648-00160-ip-10-235-23-156.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/99572/the-constructions-of-davis-and-januszkiewicz/124221
# The Constructions of Davis and Januszkiewicz

One of the most useful tools in the study of convex polytopes is to move from polytopes (through their fans) to toric varieties and see how properties of the associated toric variety reflect back on the combinatorics of the polytope. This construction requires that the polytope be rational, which is a real restriction when the polytope is general (neither simple nor simplicial). Often we would like to consider general polytopes, and even polyhedral spheres (and more general objects), where the toric variety construction does not work.

I am aware of very general constructions by M. Davis and T. Januszkiewicz (one relevant paper is Convex polytopes, Coxeter orbifolds and torus actions, Duke Math. J. 62 (1991), with several subsequent papers). Perhaps these constructions allow you to start with arbitrary polyhedral spheres, and perhaps work in even greater generality. I am asking for an explanation of the scope of these constructions and, in terms as simple as possible, how the construction goes.

- Gil, there is our construction with Kollar, for Euclidean and hyperbolic polyhedral complexes (it will work for spherical complexes as well, we just did not find any use for it), see arxiv.org/abs/1109.4047 and arxiv.org/abs/1201.3129. Unfortunately, at least for now, its main application goes from the polyhedral side to the algebro-geometric side, not in the reverse direction that you are interested in. The construction almost inevitably produces singular projective varieties; however, and this is its main point, one can control the singularities. – Misha Jun 14 '12 at 12:29

The DJ construction works with a simplicial complex $K$ and a subtorus $W\leq\prod_{v\in V}S^1$ (where $V$ is the set of vertices of $K$). People tend to be interested in the case where $|K|$ is homeomorphic to a sphere, but that isn't really central to the theory. However, it is important that we have a simplicial complex rather than something with a more general polyhedral structure. It is also important that we have a subtorus, which gives a sublattice $\pi_1(W)\leq\prod_{v\in V}\mathbb{Z}$, which is integral/rational information. I don't think that the DJ approach will help you get away from the rational case.

I like to formulate the construction this way. Suppose we have a set $X$ and a subset $Y$. Given a point $x\in\prod_{v\in V}X$, we put $\text{supp}(x)=\{v:x_v\not\in Y\}$ and $K.(X,Y)=\{x:\text{supp}(x) \text{ is a simplex}\}$. The space $K.(D^2,S^1)$ is a kind of moment-angle complex, and $K.(D^2,S^1)/W$ is the space considered by Davis and Januszkiewicz; it has an action of the torus $T=\left(\prod_{v\in V}S^1\right)/W$. Generally we assume that $W$ acts freely on $K.(D^2,S^1)$. There is a fairly obvious complexification map $K.(D^2,S^1)/W\to K.(\mathbb{C},\mathbb{C}^\times)/W_{\mathbb{C}}$. Under certain conditions relating the position of $W$ to the simplices of $K$, one can check that $K$ gives rise to a fan, that the complexification map is a homeomorphism, and that both $K.(D^2,S^1)/W$ and $K.(\mathbb{C},\mathbb{C}^\times)/W_{\mathbb{C}}$ can be identified with the toric variety associated to that fan.

- Thanks, Neil! Are there extensions of this construction to more general types of complexes? – Gil Kalai Jun 14 '12 at 11:01

Dear Gil, in addition to Dan's answer let me mention that the construction of toric varieties à la Cox has been generalized to arbitrary convex polytopes in
"Geometric spaces from arbitrary convex polytopes", Int. J. Math. 23 (2012) (the simple case had been treated in a previous paper joint with E. Prato).

- Many thanks, Fiammetta! I wonder if we can do something for polyhedral spheres which are neither polytopal, nor simplicial, nor simple. Even polyhedral 3-spheres... – Gil Kalai Mar 24 '13 at 16:06

Dear Gil, we have a nonrational construction with F. Battaglia in the case of simplicial fans here: arXiv:1108.1637, where we use foliated compact manifolds instead of toric varieties. In this setting, Stanley's proof of (the necessary part of) the g-conjecture carries over.

- Many thanks, Dan! – Gil Kalai Mar 12 '13 at 15:14

The construction of Davis and Januszkiewicz can be realized as an equivariant colimit. Let $P$ be a simple polytope of dimension $n$ and let $G$ be either the mod 2 torus ${\mathbb{Z}}_2^n$ or the usual torus ${\mathbb{T}}^n$. A characteristic function on $P$ corresponds to an order-reversing map $\chi:{\mathrm{Face}} \, P\to {\mathrm{Sub}}_{\mathbf{Grp}} G$ from the face poset of $P$ to the poset of subgroups of $G$ such that

1. The image of $\chi$ lands in the unimodular subgroups of $G$.
2. $\chi$ is graded, in the sense that ${\mathrm{codim}} \,F = {\mathrm{rank}} \, \chi F$.

There is a functor $-\times G/-:{\mathrm{Face}} P\times {\mathrm{Sub}}_{\mathbf{Grp}} G\to {\mathbf{Top}}_G$ from the poset product to the category of $G$-spaces that carries $(F,H)$ to the $G$-space $F\times G/H$. Here $G$ acts naturally on the second factor of the product:
$$(x,Hg)g':=(x,Hgg')$$
Pick out a certain subposet $Q$ of ${\mathrm{Face}} P\times {\mathrm{Sub}}_{\mathbf{Grp}} G$ by requiring that $(F,H)$ is in $Q$ if and only if $H$ is a unimodular subgroup of $\chi F$. The real or complex quasitoric manifold $M(\chi)$ over $P$ with characteristic function $\chi$ is the colimit of the composite
$$Q\hookrightarrow {\mathrm{Face}} P\times {\mathrm{Sub}}_{\mathbf{Grp}} G\xrightarrow{-\times G/-} {\mathbf{Top}}_G$$
Without reference to morphisms, and writing $H\prec \chi F$ to mean that $H$ is a unimodular subgroup of $\chi F$, we can write
$$M(\chi)={\mathrm{colim}}_{H \prec \chi F} F\times G/H$$
Replacing the face poset by a general poset and taking a general topological group $G$ (or Lie group), one can construct $G$-spaces via such an equivariant decomposition. The defect of such a generalization is that it is unclear whether the resulting $G$-space has a manifold, variety or Kähler structure. Gil, does a polyhedral sphere have a naturally associated poset over which we could construct $G$-spaces with interesting combinatorial invariants?

- Many thanks Colin! (I don't know the answer to your question.) – Gil Kalai Nov 12 '13 at 18:32
- Is there a natural poset associated to a polyhedral sphere? – user2529 Nov 13 '13 at 15:07
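As a footnote to the $K.(X,Y)$ description above, a small worked example (my own illustration, not from the thread): take $K$ to be two disjoint vertices, i.e. the boundary of a 1-simplex, so the simplices are $\emptyset$, $\{1\}$, $\{2\}$. A point of $(D^2)^2$ is excluded exactly when both coordinates lie off $S^1$, so

$$K.(D^2,S^1) = (D^2\times S^1)\cup(S^1\times D^2) = \partial(D^2\times D^2) = S^3.$$

Taking $W$ to be the diagonal circle $\{(z,z)\}$, which acts freely, the quotient $S^3/S^1$ is the Hopf quotient $S^2 \cong \mathbb{CP}^1$, the toric variety of the fan of $\partial\Delta^1$, with the residual torus $T=(S^1\times S^1)/W$ acting by rotation.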
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9074339270591736, "perplexity": 281.44689122589295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125857.44/warc/CC-MAIN-20160428161525-00029-ip-10-239-7-51.ec2.internal.warc.gz"}
http://en.wordpress.com/tag/eigenvalues/
## Tags » Eigenvalues

#### Extracting Eigenvalues from a Linear Regression in R

The function below extracts the eigenvalues and an eigenvector from a linear regression model. I wrote this function so that the eigenvalues and vectors are the same as those reported by the SAS PROC REG procedure when using the COLLIN option. (Excerpt; 286 more words. Tag: Math & Stats Fun)

#### Random matrices have simple spectrum

Van Vu and I have just uploaded to the arXiv our paper "Random matrices have simple eigenvalues". Recall that an Hermitian matrix is said to have simple eigenvalues if all of its eigenvalues are distinct. (Excerpt; 616 more words. Tag: Math.CO)
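The R function referred to in the first excerpt is not included in this listing. As an illustration of the idea it describes, here is a sketch in Python, in the spirit of what SAS's COLLIN option reports (my own construction: scale the design-matrix columns, intercept included, to unit length, then take the eigenvalues of $X^\top X$ and the derived condition indices):

```python
import numpy as np

def collin_diagnostics(X):
    """Eigenvalue-based collinearity diagnostics: eigenvalues of the
    column-scaled cross-product matrix, plus condition indices.
    Condition indices much larger than ~30 conventionally flag collinearity."""
    Xs = X / np.linalg.norm(X, axis=0)            # unit-length columns
    eigvals = np.linalg.eigvalsh(Xs.T @ Xs)[::-1] # largest first
    cond_idx = np.sqrt(eigvals[0] / eigvals)
    return eigvals, cond_idx

# toy design matrix: intercept plus two nearly collinear predictors
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
X = np.column_stack([np.ones(50), x1, x1 + 1e-3 * rng.normal(size=50)])
vals, idx = collin_diagnostics(X)
print(vals)   # one eigenvalue near zero signals the collinear pair
print(idx)    # correspondingly huge condition index
```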
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8330431580543518, "perplexity": 1080.6301067834625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115858583.14/warc/CC-MAIN-20150124161058-00216-ip-10-180-212-252.ec2.internal.warc.gz"}
http://www.physicsforums.com/showthread.php?t=243871
# Basic Projectile: soccer ball kick

Posted by zileas. Tags: ball, basic, kick, projectile, soccer

1. The problem statement, all variables and given/known data

A soccer ball is kicked with an initial speed of 12.0 m/s in a direction 25.0 degrees above the horizontal. Find the magnitude and direction of its velocity (a) 0.250 s and (b) 0.500 s after being kicked. (c) Is the ball at its greatest height before or after 0.500 s?

2. Relevant equations

$$a = (v_f - v_i)/t$$
$$d = \frac{v_i + v_f}{2}\, t$$
$$d = v_i t + 0.5\, a t^2$$
$$v_f^2 = 2ad + v_i^2$$

3. The attempt at a solution

I'm really stuck on this. I've gone with:

Vh = cos(25°)(12.0 m/s) = 10.88 m/s
Vi = sin(25°)(12.0 m/s) = 5.07 m/s

Now I've heard in class that in these types of problems the initial vertical velocity has to be equal (in magnitude) to the final vertical velocity... So since I have Vi = 5.07 m/s, I set Vf = -5.07 m/s and worked from there, but I am not sure at all if this was correct. From there I tried to work out the displacement:

-5.07 m/s = 2(-9.8 m/s²)(d) + 5.07²
d = 1.57 m

... and this is where I KNEW (at least I think I know...) that I'm horribly wrong. A kick with an initial vertical velocity of 5.07 m/s and overall 12.0 m/s... and my displacement is 1.57 m total... If anybody can help get me pointed in the right direction on this, I would GREATLY appreciate it.

Reply:
$$v_x = v_{0x} + a_x \cdot t = v_{0x} = 12 \cos(25^{o})$$
$$v_y = v_{0y} + a_y \cdot t = 12\sin(25^{o}) - 9.81 \cdot t$$
$$v=\sqrt{v_x^2+v_y^2}$$

Quote by dirk_mec1's reply above. — I don't understand how these help me... aren't there still 2 unknowns, a and t? I'm so confused.

Reply: Those equations tell you the final answer... the equation for $v$ he gave you is the final magnitude of the velocity. The $a$ is not unknown because it is 9.8 m/s² (the acceleration of gravity in the y direction); the $t$ is "unknown" only in the sense that you have to plug in the values of t you want to solve for:

$v_x = v_0\cos(25°)$
$v_y = v_0\sin(25°) - (9.8\text{ m/s}^2)(t)$

Time doesn't have to be factored into $v_x$ because the speed in the x direction is constant until the ball hits the ground, as there are no horizontal forces acting on it (assuming no air friction). $v_y$ does need time factored in because gravity is acting on it. So plug in the time values for t that you need answers for and use the equation given by dirk to find the magnitude of the velocity: $v = \sqrt{v_x^2 + v_y^2}$.
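Putting the numbers in (my own check, not part of the thread):

```python
import math

v0, theta, g = 12.0, math.radians(25.0), 9.8
vx = v0 * math.cos(theta)               # constant horizontal component
vy0 = v0 * math.sin(theta)

for t in (0.250, 0.500):
    vy = vy0 - g * t                    # vertical component at time t
    speed = math.hypot(vx, vy)
    angle = math.degrees(math.atan2(vy, vx))
    print(f"t = {t:.3f} s: |v| = {speed:.2f} m/s at {angle:+.1f} deg")

print(f"time of greatest height: {vy0 / g:.3f} s")  # ~0.517 s, i.e. after 0.500 s
```

This gives roughly 11.2 m/s at +13.6° for part (a), 10.9 m/s at +0.9° for part (b), and the peak slightly after 0.500 s for part (c).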
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.89720219373703, "perplexity": 824.0442583489458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823634.2/warc/CC-MAIN-20140820021343-00177-ip-10-180-136-8.ec2.internal.warc.gz"}
https://douglasrumbaugh.com/post/imperial-defense/
# In Defence of Imperial Units

One thing that I have noticed over years spent on the Internet is that there is a seemingly large, and very vocal, group of people who spend a lot of time and energy getting worked up about measurement systems. And, almost universally, their distaste is targeted in one direction: imperial units. Metric good, imperial bad, is the general refrain. I recently spent a bored evening paging through several anti-imperial/pro-metric videos on YouTube and, in spite of seeing a lot of emotion and plenty of non-sequitur arguments, I didn't find much in the way of actually valid and meaningful argumentation. So I wanted to take a few minutes and write out responses to some of the common arguments (or, in many cases, non-arguments) presented in these videos, from the perspective of someone who, horror of horrors, is rather fond of the imperial system.

# Disambiguating Terminology

First things first, we need to establish what we mean by "imperial" and also by "metric" units. The truth of the matter is that these terms mean different things in different contexts and in different time periods, and it is important to be precise with our language. For example, one interpretation of "imperial" might be the English Engineering system with its truly awful pound-mass (lbm) unit for measuring mass, necessitating an extra conversion factor within equations because its definition is inconsistent with the rest of the system. But that is, of course, not a system I'm going to be defending. For the purposes of this post, when I use the phrase "imperial system", "imperial units", etc., I am actually talking about the more modern British Gravitational System, with its base units of foot-slug-second. And when I talk about metric, I will be referring to the meter-kilogram-second system, also known as SI units. Neither of those systems has an independent volume measurement, using cubic feet and cubic meters as the base volume units respectively, so I'll also talk about liters and gallons, and their associated units.

# Common Anti-imperial Arguments

Now that we have defined the terms that we are talking about, let's take a look at some common arguments that I've seen crop up within these videos. I'll do my best to represent them accurately and in good faith. We'll start with the most obvious one.

## Metric Unit Conversions are Easier

One very common argument in favor of the metric system and against the imperial system is that unit conversions within the system are easier. Metric conversions are done in a fully standardized system, needing only a shift of the decimal point, whereas imperial conversions are much more convoluted. A classic example is that $1 \text{ km} = 1000 \text{ m}$, whereas $1 \text{ mi} = 5280 \text{ ft}$.

This point is absolutely true. Performing a conversion from centimeters to kilometers is far easier than from inches to furlongs. You can't really argue otherwise. However, I would argue that this point is pretty much irrelevant for most practical situations. The fact of the matter is that unit conversions fulfill very different purposes in metric and imperial, so comparing the difficulty of conversion between two metric units against two imperial units isn't particularly valid. Metric unit conversions are used primarily to condense the representation of numbers, and imperial unit conversions are mainly used to ease measurement or calculations.
In the case of metric, a unit conversion is performed by shifting the decimal point around, which changes a prefix on the unit in question. For example, there are 100 centimeters in 1 meter, so 100 meters is equal to 10,000 centimeters. This is a very straightforward and easy process–but, at least in my opinion, it's largely vacuous. The reason I say this is that, effectively, all this scaling up and down of the units accomplishes is reducing the number of insignificant digits that you need to represent with a 0. If you have 1200 meters, you can avoid writing two of those zeroes by instead writing 1.2 kilometers. But, in practice, that's all you accomplish.

This is made even less significant by how these unit conversions are actually used in practice. Without going too deeply into the weeds of dimensional analysis, the equations used in science and engineering only work out cleanly when the units associated with all of the physical quantities used as inputs to the equation align. As a simple example, in Newton's 2nd Law of Motion,
$$\vec{F} = m\vec{a}$$
the force ($\vec{F}$) is conventionally expressed in newtons. But this only works if mass is in kilograms and acceleration is in meters per square second. If you were to use a mass in grams, the result would no longer be in newtons; to get even a self-consistent (if non-standard) force unit, you would have to scale the other inputs too, for example expressing the acceleration in millimeters per square second. All of the quantities need to be scaled up or down together–otherwise you'll get a result in a non-standard set of units for force. This is actually why the CGS system defines its own unit of force, the dyne, for using centimeters and grams together in this equation, and others.

In practice, what this means is that, when working in the SI system to do science or engineering, you rarely use kilometers or milligrams or centimeters. It causes too much of a hassle making sure that all of your units are consistent with each other. Most people simply convert everything into meters and kilograms, and then go from there. And then it is far simpler just to leave the resulting quantities in the base units as well, because scientific notation handles the compression of insignificant zeroes perfectly well, without requiring the next person to undo your conversions before using the result in another calculation.

Likewise, it is very rare in real life to convert from feet or yards into miles when using the imperial system. It isn't as though people actually use these as subdivisions of one another in most cases. You'd never see someone report a distance as "5 miles, 1571 yards, 2 feet, and 9 inches". You will see people use feet and inches together like this (and sometimes yards), but miles are never thrown into the mix. And the conversions for inches, feet, and yards are easier than the strawman example of feet and miles that is commonly bandied about. They're still more complex than simply shifting the decimal, like in metric, but for the additional complexity, these conversions actually aid in performing calculations with these units. For example, if you wanted to subdivide a foot into 3 equal parts, it's much nicer to think in terms of 4 inches than in terms of 0.33333333… feet. This is a topic I'll discuss in more detail in a later post on the advantages of the imperial system.

So yes: it is true that unit conversions are far easier in metric than they are in imperial.
But the commonly provided examples of difficult imperial conversions are, in practice, almost never used, and the commonly used conversions in imperial are relatively simple (though, admittedly, not as simple as metric).

## The Imperial System is a Mess

I'm not going to write at length on this one because it seems to me to be largely a non-sequitur. You'll frequently see an image bandied about as though it proves something: a diagram showing the relations between the many different units of length used in the imperial system, with their conversion factors. And it's true, as rendered in that figure the system seems very complex compared to metric, which doesn't even warrant a diagram, as the only unit of length relevant there is the meter.

The trouble is that this image is also incredibly misleading. The imperial system is a lot older than the metric system, and has picked up a lot of additional baggage over the years. Most of the units shown on this diagram are either (a) not in use any more, or (b) used in very specific contexts. Unless you're doing surveying work or typography, it is likely that the only units that really matter on this chart are inches, feet, yards, and miles.

That said, there are examples of imperial units where this critique is valid. One commonly noted example is the definition of a "barrel", of which there are several versions used in different contexts. Similarly there is the short and the long ton. But, again, these distinctions arise in very specialized areas. And the barrel situation can be fixed via standardization within the imperial system, without a conversion to metric. Similar odd units crop up in metric contexts too, like shakes, angstroms, light-years, light-nanoseconds, parsecs, electron-volts, foes, ergs, etc. One might argue that these are obscure, or relevant to very particular scientific domains. To which I would reply that the same can be said of links, cubits, and skeins.

## The Mars Climate Orbiter, Air Canada Fueling Incident, and Other Such Examples

Another common argument used against imperial is the set of accidents caused by either misconversions or miscommunications in areas where both metric and imperial units were used concurrently. These are not arguments specifically against imperial. The accidents that occurred were not due to the imperial units themselves, but rather due to environments in which two unit systems were used together. These examples demonstrate that using only a single unit system at a time in a given context is a good idea, but they don't provide any specific condemnation of imperial. The same could have happened at the interface between two metric systems (like SI and CGS).

## Scientific Literacy

One particularly interesting argument was brought up by Kurtis Baute at around 4:30 in this video. He argues that science is fundamentally about taking measurements–which is absolutely true–and that being able to accurately take a measurement is important to science and scientific literacy, which is also very true. He then goes on to say that the use of imperial units prevents people from knowing "what a meter is" and implies that this prevents them from being able to do science.

Okay, so to be fair I did pick a pretty over-the-top variation of this argument to lay out. Generally speaking, science is done using metric, and so in order to participate in science, one must learn metric. This is true. However, I don't think that the use of imperial units within the US significantly hurts metric literacy.
For one thing, metric is taught in schools in the US. I certainly learned it, and I'm sure most other Americans did as well. In fact, for a few years I managed a general physical sciences course at a university that I worked at–the sort of base-level exposure course that every first-year student was required to take–and I was consistently surprised by how almost every student was much more comfortable with metric than with imperial. I think that it has to do with the fact that the imperial system relies heavily on fractions, rather than simple decimal shifting, which seems to be a trouble spot for a lot of students.

As a side note here, I sometimes feel that the metricization of science in high school goes perhaps a little too far. It is not uncommon when judging high school science fairs to see very strange quantities in either experimental procedures or results sections. For example, one might see a methods section including "56.7 grams of baking soda were added to 236.59 milliliters of water" as one of the steps. Why such strange numbers? Because the student in question was doing an experiment in their kitchen, and measured out 2 oz. of baking soda to add to 1 cup of water. But because "SCIENCE MUST BE METRIC!!!!", instead of reporting the precise measurements that they took on the instruments that they had available, they lose precision and do silly and unnecessary conversions for no good reason. To accurately report the measurements that they took, in the form that they took them, would result in losing points in judging, because metric is the only acceptable unit system in science.

## Convenient "rough" Estimates between Dimensions

Another advantage of the metric system is the convenient conversion between the standard SI volume units and the non-standard but commonly used volume unit of liters. Specifically, one cubic centimeter is equal to one milliliter. And further, 1 milliliter is roughly equal to 1 gram of water. Yes–this is a rough and somewhat problematic conversion, but it gets brought up a lot. Taking an example from this video, we can easily calculate how much water we can haul in our truck, with a max payload of 1,000 kilograms,
$$1000 \text{ kg} * \frac{1 \text{ L}}{1 \text{ kg}} = 1000 \text{ L}$$
What he doesn't mention is that a similar conversion exists in imperial: one pint of water weighs about 1 pound. It isn't quite as precise as the metric case, but is close enough for practical use (it's about 4% off). So to do the same problem in imperial units (using 1 ton instead of 1000 kilograms),
$$2000 \text{ lbs} * \frac{1 \text{ pt}}{1 \text{ lb}} = 2000 \text{ pt}$$
Now if we wanted the result in gallons, we'd need to divide by 8, which I admit isn't quite as convenient as shifting a decimal place. But it still isn't too bad: 250 gallons. The correct answer is about 240 gallons, so the estimate is roughly 4% high (a pint of water actually weighs a little more than a pound). Not too bad for a stupid, horrible unit system.

In either case, this is very much in the "rough" calculation territory for precise situations, because the density of water is actually a function of temperature and pressure, and so such calculations will always have a margin of error unless these other variables are also accounted for. At which point a thermodynamic table must be consulted, and any illusion of "speedy calculation" goes out the window in both cases.
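A quick numeric check of the imperial rough estimate against exact conversion factors (my own sketch; 1 lb = 0.45359237 kg and 1 US gal = 3.785411784 L are exact by definition, and water is taken at 1 kg/L):

```python
LB_TO_KG = 0.45359237        # kg per lb, exact by definition
L_PER_GAL = 3.785411784      # liters per US gallon, exact by definition
PINTS_PER_GALLON = 8

payload_lb = 2000
estimate_gal = payload_lb / PINTS_PER_GALLON       # "a pint's a pound"
exact_gal = payload_lb * LB_TO_KG / L_PER_GAL      # water at 1 kg/L
print(f"estimate: {estimate_gal:.0f} gal, exact: {exact_gal:.1f} gal, "
      f"error: {100 * (estimate_gal / exact_gal - 1):.1f}%")
# estimate: 250 gal, exact: 239.7 gal, error: 4.3%
```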
## Metric Units are Fundamental

These days, the metric units have all been defined in terms of universal constants. Thus, these units are fixed to the fundamental nature of the universe in some way. Of course, these definitions are retroactive. The units had already existed, and were simply tied back to some fundamental constant with a conversion factor chosen to make sure that the defined value matched up with the original one. And, of course, imperial units are defined in terms of metric ones now, and so have just as strong a claim to "fundamentality" as the metric ones do. If you're going to define a meter with the constant $\frac{1}{299792458}$ relative to a fundamental constant, and a foot with the constant $\frac{1}{913767411984}$ relative to that same fundamental constant, can you really claim one is "more fundamental" than the other? Those are both pretty ugly numbers.

## The Imperial System uses Pound for both Mass and Weight

It used to. It doesn't anymore. The British Gravitational System does away with pound-mass and has the slug as its unit of mass instead. No ugly conversions needed. The metric system, by contrast, in general use, uses the kilogram for both mass and weight. If anything, the imperial system is more accurate to reality here, measuring weight with a unit of force.

# Conclusion

After all this, I do want to say that I do like metric units and use them quite regularly. But I also like imperial units, and use them regularly too. I'll concede that there are definitely situations where metric units are better than imperial (drill, tap, and screw sizing come to mind…). But I get consistently annoyed by the metric elitists bandying about bad arguments for their good system, and so I wanted to address some of those arguments here. Imperial units are not automatically dumb, imprecise, or non-scientific. And you can do perfectly good work no matter which unit system you choose.

In a future post, I will address some of the actual advantages that imperial units have over metric. Spoiler alert–there are a lot of reasons why 10 is a terrible base for your system of measurement (or counting, for that matter, but that's another story entirely).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8426031470298767, "perplexity": 739.4213581431743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00336.warc.gz"}
https://www.hepdata.net/record/ins1609448
Measurement of detector-corrected observables sensitive to the anomalous production of events with jets and large missing transverse momentum in $pp$ collisions at $\mathbf {\sqrt{s}=13}$ TeV using the ATLAS detector

The ATLAS collaboration. Eur.Phys.J.C 77 (2017) 765, 2017.

Abstract (data abstract): CERN-LHC. Observables sensitive to the anomalous production of events containing hadronic jets and missing momentum in the plane transverse to the proton beams at the Large Hadron Collider are presented. The observables are defined as a ratio of cross sections, for events containing jets and large missing transverse momentum to events containing jets and a pair of charged leptons from the decay of a $Z/\gamma^\ast$ boson. This definition minimises experimental and theoretical systematic uncertainties in the measurements. This ratio is measured differentially with respect to a number of kinematic properties of the hadronic system in two phase-space regions; one inclusive single-jet region and one region sensitive to vector-boson-fusion topologies. The data are found to be in agreement with the Standard Model predictions and used to constrain a variety of theoretical models for dark-matter production, including simplified models, effective field theory models, and invisible decays of the Higgs boson. The measurements use 3.2 fb${}^{-1}$ of proton-proton collision data recorded by the ATLAS experiment at a centre-of-mass energy of 13 TeV and are fully corrected for detector effects, meaning that the data can be used to constrain new-physics models beyond those shown in this paper.

Numerator and denominator ($\geq 1$ jet):
- $p_\text{T}^\text{miss} > 200$ GeV
- no additional electron or muon with $p_\text{T}$(lepton)>7 GeV and |$\eta$(lepton)|<2.5
- |$y$(jet)|<4.5 and $p_\text{T}$(jet)>25 GeV
- $\Delta\phi$(jet,$p_\text{T}^\text{miss}$) > 0.4 for the four leading jets with $p_\text{T}$(jet)>30 GeV
- leading $p_\text{T}$(jet)>120 GeV and |$\eta$(jet)|<2.4

Numerator and denominator (VBF):
- $p_\text{T}^\text{miss} > 200$ GeV
- no additional electron or muon with $p_\text{T}$(lepton)>7 GeV and |$\eta$(lepton)|<2.5
- |$y$(jet)|<4.5 and $p_\text{T}$(jet)>25 GeV
- $\Delta\phi$(jet,$p_\text{T}^\text{miss}$) > 0.4 for the four leading jets with $p_\text{T}$(jet)>30 GeV
- leading $p_\text{T}$(jet)>80 GeV and subleading $p_\text{T}$(jet)>50 GeV
- $m_\text{jj}$>200 GeV
- no additional jets with $p_\text{T}$(jet)>25 GeV inside the rapidity interval

Denominator only ($\geq 1$ jet and VBF):
- leading $p_\text{T}$(lepton)>80 GeV and |$\eta$(lepton)|<2.5
- subleading $p_\text{T}$(lepton)>7 GeV and |$\eta$(lepton)|<2.5
- 66 GeV $< m_{\ell\ell}<$ 116 GeV
- $\Delta R$(jet,lepton)>0.5, otherwise the jet is removed

• #### Table 1 Data from F4 10.17182/hepdata.78366.v2/t1 Measured and expected $R^\text{miss}$ as a function of $p_\text{T}^\text{miss}$ in the $\geq 1$ jet phase space. The fiducial SM predictions...

• #### Table 2 Data from F4 10.17182/hepdata.78366.v2/t2 Measured and expected $R^\text{miss}$ as a function of $p_\text{T}^\text{miss}$ in the VBF phase space. The fiducial SM predictions for the...

• #### Table 3 Data from F4 10.17182/hepdata.78366.v2/t3 Measured and expected $R^\text{miss}$ as a function of $M_\text{jj}$ in the VBF phase space. The fiducial SM predictions for the...

• #### Table 4 Data from F4 10.17182/hepdata.78366.v2/t4 Measured and expected $R^\text{miss}$ as a function of $\Delta\phi_\text{jj}$ in the VBF phase space. The fiducial SM predictions for the...
• #### Table 5 Data from AUX 10.17182/hepdata.78366.v2/t5 Statistical-only correlation matrix for all four measured distributions. Bins labelled 1-7 correspond to the $p_\text{T}^\text{miss}$ distribution in the... • #### Table 6 Data from AUX 10.17182/hepdata.78366.v2/t6 Statistical-only covariance matrix for all four measured distributions. Bins labelled 1-7 correspond to the $p_\text{T}^\text{miss}$ distribution in the... • #### Table 7 Data from AUX 10.17182/hepdata.78366.v2/t7 Systematic covariance matrix for all four measured distributions. Bins labelled 1-7 correspond to the $p_\text{T}^\text{miss}$ distribution in the... Version 2 modifications: Predicted SM values in distributions are now divided by bin width, as stated in the table headline. All measured results are unchanged.
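Schematically (my notation, not copied from the record), the observable tabulated above is the cross-section ratio

$$R^\text{miss}(x) \;=\; \frac{\mathrm{d}\sigma\left(p_\text{T}^\text{miss}+\text{jets}\right)/\mathrm{d}x}{\mathrm{d}\sigma\left(\ell^+\ell^-+\text{jets}\right)/\mathrm{d}x},$$

where $x$ is one of $p_\text{T}^\text{miss}$, $m_\text{jj}$, or $\Delta\phi_\text{jj}$, and the denominator uses $Z/\gamma^\ast\to\ell^+\ell^-$ events passing the analogous selection listed above.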
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9775800704956055, "perplexity": 3538.805782034793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056392.79/warc/CC-MAIN-20210918093220-20210918123220-00496.warc.gz"}
https://prateekkumar.in/slides/monotonicity-testing-lower-bounds-via-communication-complexity/
# Monotonicity Testing Lower Bounds via Communication Complexity

Prateek Kumar, Nov 9, 2018. CS5030: Communication Complexity, Class Presentation

## Property Testing

Let $D$, $R$ be two sets; then a property $\mathcal{P}$ is a set of functions from $D$ to $R$. Examples: linearity, monotonicity.

Property testing: given a black box which outputs $f(x)$ when provided with a query $x$, the goal is to decide whether $f \in \mathcal{P}$ using as few queries to the black box as possible. Exact property testing requires $\Theta(|D|)$ queries, so our focus will be on randomized algorithms.

## $\epsilon$-Property Testing

Decide whether $f \in \mathcal{P}$ or $f$ is $\epsilon$-far from $\mathcal{P}$. $f$ is $\epsilon$-far from $\mathcal{P}$ iff $\forall g \in \mathcal{P}$, $f$ and $g$ differ at $\geq \epsilon|D|$ values.

Query complexity: $\min_{T \in \text{testers}} \max_{f : D \to R} Q(T, f)$

## Monotonicity testing upper bounds

Boolean case: $D = \{0, 1\}^n, R = \{0, 1\}$. A function $f : \{0, 1\}^n \to \{0, 1\}$ is monotone iff $f(x_0) \leq f(x_1)$ for all $x_0, x_1$ such that $x_0$ and $x_1$ differ at the $i$th bit only, with $x_0$ having 0 and $x_1$ having 1 as the $i$th bit.

Theorem: For (randomized) $\epsilon$-monotonicity testing, the upper bound on the number of queries is $O(\frac{n}{\epsilon})$. For larger ranges, it is $O(\frac{n\log_2(|R|)}{\epsilon})$. (Proof at the end, if time available.)

## Monotonicity testing lower bounds

First we will see bounds for larger ranges ($|R| = \Omega(n)$).

Theorem: For large enough ranges $R$ and $\epsilon = \frac{1}{8}$, every adaptive monotonicity tester with 2-sided error uses $\Omega(n)$ queries.

### Proof of the theorem

We will reduce $\text{DISJ}$ to $\frac{1}{8}$-monotonicity testing. Let $T$ be a randomized tester that distinguishes monotone from $\frac{1}{8}$-far from monotone. We will develop a public-coin randomized protocol for $\text{DISJ}$. Let $A, B \subseteq [n]$, with Alice holding $A$ and Bob holding $B$.

Lemma: For $A, B \subseteq [n]$, define the function $h_{AB} : 2^{[n]} \to \mathbb{Z}$ by
$$h_{AB}(S) = 2|S| + (-1)^{|S \cap A|} + (-1)^{|S \cap B|}.$$
Then:
1. $A \cap B = \phi$ implies $h_{AB}$ is monotone.
2. $A \cap B \neq \phi$ implies $h_{AB}$ is $\frac{1}{8}$-far from monotone.

Let us assume the lemma is true (the proof comes in a moment).

### Protocol

Alice and Bob agree on random bits and feed those bits to their own copies of the tester $T$. The tester has to decide whether $h_{AB}$ is monotone or $\frac{1}{8}$-far from monotone. The tester queries $h_{AB}(S)$ for some $S \subseteq [n]$. Alice and Bob together compute the value of $h_{AB}(S)$ and feed it to their testers.

1. Until $T$ halts:
 • Let $S$ be the query asked by $T$.
 • Alice sends $(-1)^{|S \cap A|}$ to Bob. (1 bit)
 • Bob sends $(-1)^{|S \cap B|}$ to Alice. (1 bit)
 • Both compute $h_{AB}(S)$ and feed their testers.
2. If $T$ accepts $h_{AB}$ then Alice/Bob declare $\text{DISJOINT}$, else they declare $\text{NOT DISJOINT}$.

Communication complexity of the protocol $= 2 \times (\text{number of queries asked by } T)$. Since $\text{DISJ}$ has $\Omega(n)$ public-coin randomized complexity, the query complexity of monotonicity testing is $\Omega(n)$.

## Proof of the lemma

Case (i) $A \cap B = \phi$. Let $S \subseteq [n] - \{i\}$. Since $A$ and $B$ are disjoint, $i \notin$ at least one of $A$ and $B$, so adding $i$ increases $2|S|$ by 2 and flips at most one of the two sign terms (a change of at most $-2$). Hence $h_{AB}(S \cup \{i\}) - h_{AB}(S) \geq 0$ for all $i \in [n]$, $S \subseteq [n] - \{i\}$, and $h_{AB}$ is monotone.

Case (ii) $A \cap B \neq \phi$. Let $i \in A \cap B$.
Consider $S \subseteq [n] - \{i\}$ such that $(-1)^{|S \cap A|} = 1$ and $(-1)^{|S \cap B|} = 1$. In this case, adding $i$ flips both signs, so $h_{AB}(S \cup \{i\}) - h_{AB}(S) = 2 - 2 - 2 < 0$. There are at least $2^{n-1}/4 = 2^n/8$ possible values of $S$ for a given pair $A, B$, for the following reason.

Consider the sets $A' = A - B$, $B' = B - A$, and assume for now that $A'$ and $B'$ are non-empty. We use that $Pr_{S}[|S \cap C| \mod 2 = 0] = \frac{1}{2}$ for any non-empty set $C$. Since $A'$, $A \cap B$, $B'$ are mutually disjoint, the events $(|S \cap A'| \mod 2 = 0)$, $(|S \cap (A \cap B)| \mod 2 = 0)$, $(|S \cap B'| \mod 2 = 0)$ are mutually independent. Therefore
$$Pr_{S}\left((-1)^{|S \cap A|} = 1 \text{ and } (-1)^{|S \cap B|} = 1\right) = Pr_{S}(\text{all three counts even}) + Pr_{S}(\text{all three counts odd}) = \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{2} + \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{2} = \frac{1}{4}.$$
Note: when $A'$ is empty, the condition becomes that both $|S \cap (A \cap B)|$ and $|S \cap B'|$ are even, and the same probability of $\frac{1}{4}$ is obtained.

For each of the possible pairs $(S, S \cup \{i\})$ (which are disjoint for fixed $i$), we have to change one of the two values of $h_{AB}$ (i.e. either $h_{AB}(S)$ or $h_{AB}(S \cup \{i\})$) to make it monotone. The total number of such modifications is $\geq \frac{2^n}{8}$, so $h_{AB}$ is $\frac{1}{8}$-far from monotone.

### Query complexity when $|R| = \Omega(\sqrt{n})$

The trick is to truncate $h_{AB}$ to obtain $h'_{AB}$ as follows:

1. $h_{AB}(S) < n - c\sqrt{n} \implies h'_{AB}(S) = n - c\sqrt{n}$.
2. $h_{AB}(S) > n + c\sqrt{n} \implies h'_{AB}(S) = n + c\sqrt{n}$.
3. $h'_{AB}(S) = h_{AB}(S)$ in all other cases.

Case (i) When $A, B$ are disjoint, truncating gives another monotone function.
Case (ii) When $A, B$ are not disjoint, $h'_{AB}$ turns out to be $\frac{1}{16}$-far from monotone. First, note that $h_{AB}$ and $h'_{AB}$ differ on at most a $\frac{1}{16}$ fraction of entries (using Chebyshev's inequality). Since Hamming distance obeys the triangle inequality, $h'_{AB}$ is $\frac{1}{16}$-far from monotone.

### Query complexity when $|R| = o(\sqrt{n})$

Let $m = |R|^2$ and consider a function $g : 2^{[m]} \to R$. In this case, define $h : 2^{[n]} \to R$ by $h(S) = g(S \cap [m])$.

$g$ monotone $\implies$ $h$ monotone (clear from the definition). Next, we show that $g$ $\epsilon$-far from monotone $\implies$ $h$ $\epsilon$-far from monotone. Suppose $h$ is not $\epsilon$-far from monotone $\implies$ $\exists$ monotone $h'$ such that $Pr_{X \subseteq [m], Y \subseteq [n]-[m]}[h(X \cup Y) \neq h'(X \cup Y)] \leq \epsilon$. By an averaging argument, $\exists Y_0$ such that $Pr_{X \subseteq [m]}[h(X \cup Y_0) \neq h'(X \cup Y_0)] \leq \epsilon$. Define $g'$ by $g'(X) = h'(X \cup Y_0)$; then $g'$ is monotone and $Pr_{X \subseteq [m]}[g(X) \neq g'(X)] \leq \epsilon$, so $g$ is also not $\epsilon$-far from monotone.

To test $g$, we can test $h$ and return the result. Since $g$ requires $\Omega(m)$ queries, $h$ also requires $\Omega(m) = \Omega(|R|^2)$ queries.

## Monotonicity testing upper bounds

Theorem: For (randomized) $\epsilon$-monotonicity testing with boolean range, the upper bound on the number of queries is $O(\frac{n}{\epsilon})$. For larger ranges, it is $O(\frac{n\log_2(|R|)}{\epsilon})$. (Proof on the board.)
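As a quick empirical sanity check of the lemma (my own sketch, not part of the original slides), one can brute-force count the monotonicity-violating pairs $(S, S\cup\{i\})$ of $h_{AB}$ for small $n$:

```python
from itertools import combinations

def h(S, A, B):
    return 2 * len(S) + (-1) ** len(S & A) + (-1) ** len(S & B)

def violating_pairs(n, A, B):
    """Count pairs (S, S u {i}) along which h_AB strictly decreases."""
    count = 0
    for r in range(n + 1):
        for c in combinations(range(n), r):
            S = frozenset(c)
            for i in range(n):
                if i not in S and h(S | {i}, A, B) < h(S, A, B):
                    count += 1
    return count

n = 6
print(violating_pairs(n, frozenset({0, 1}), frozenset({2, 3})))  # disjoint: 0
print(violating_pairs(n, frozenset({0, 1}), frozenset({1, 2})))  # >= 2**n/8 = 8
```

For disjoint sets the count is 0, and for intersecting sets it is at least $2^n/8$, matching the two cases of the lemma.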
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9502204656600952, "perplexity": 756.5598851207158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202496.35/warc/CC-MAIN-20190321051516-20190321073516-00530.warc.gz"}
https://physics.stackexchange.com/questions/442289/what-is-the-mass-of-n-a-atoms-of-carbon-12
# What is the mass of $N_A$ atoms of carbon-12?

With the recent redefinition of the kilogram, what is the mass of $N_A$ (Avogadro's number) atoms of carbon-12? $N_A$ was defined as exactly 6.02214076×$10^{23}$ per mole. How close would the mass of $N_A$ atoms of carbon-12 be to 12 grams? Is it still true that the mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 12 grams of carbon-12? Is a mole exactly $N_A$?

• Have you read the relevant sections of physics.stackexchange.com/questions/147433/… ? – PM 2Ring Nov 21 '18 at 7:58
• Oh, I don't know why I didn't find that! Thanks. But reading over the section on "The Mole" leaves me puzzled as to the answer to my question ... $M_u=m_u N_A$ is the molar mass constant, which ceases to be fixed and obtains the same uncertainty as $m_u$, equal to 1/12 of the mass of $N_A$ carbon-12 atoms. – Sheldon Nov 21 '18 at 10:54
• My take is that a mole is exactly $N_A$, so a mole of $^{12}C$ isn't exactly 12g, instead it's 12.0000000g. – PM 2Ring Nov 21 '18 at 11:06
• I assume it's as close to 12 grams as current experimental error allows. But it still seems weird. Like why even bother defining $N_A$ as an exact number ??? – Sheldon Nov 21 '18 at 16:08

The status of the mass of a mole of $^{12}\rm C$ in the revised SI is basically identical to that of the magnetic vacuum permeability $\mu_0$, which is explored in this and this question. The SI revision does not consist of a re-definition of the kilogram in isolation: instead, the full set of the seven base units get redefined, including in particular the mole, which is no longer defined in terms of a fixed value of the molar mass of carbon-12 and which is instead tied to a fixed value of the Avogadro constant.

For the full details, see the over-arching Q&A "What are the proposed realizations in the New SI for the kilogram, ampere, kelvin and mole?" (as well as this table in the Wikipedia page to see how all the relevant units and constants change status), but the short answer to your question, "how close would the mass of $N_A$ atoms of carbon-12 be to 12 grams?", is that it will be close, but not exactly equal, and it will be an experimentally-determined quantity with a finite uncertainty in the future.

This is spelled out explicitly in the Resolution that implements the revision (which will presumably be named as Resolution 1 of the CGPM 2018), and particularly in its appendices, which contain an explicit definition of the mole,

> The mole, symbol mol, is the SI unit of amount of substance. One mole contains exactly $6.022 140 76 \times 10^{23}$ elementary entities. This number is the fixed numerical value of the Avogadro constant, $N_A$, when expressed in the unit $\rm mol^{-1}$ and is called the Avogadro number. The amount of substance, symbol $n$, of a system is a measure of the number of specified elementary entities. An elementary entity may be an atom, a molecule, an ion, an electron, any other particle or specified group of particles.

as well as an explicit comment on the status of the molar mass of carbon-12,

> the molar mass of carbon 12, $M(^{12}\mathrm C)$, is equal to $0.012 \:\rm kg \: mol^{-1}$ within a relative standard uncertainty equal to that of the recommended value of $N_Ah$ at the time this Resolution was adopted, namely $4.5 \times 10^{-10}$, and that in the future its value will be determined experimentally.
The appearance of the Planck constant $h$ in the relative uncertainty of $M(^{12}\mathrm C)$ after the change is directly tied to the fact that the relative uncertainty in the mass of the International Prototype Kilogram immediately after the change is exactly equal to the current relative uncertainty in $h$ (because of the way the switch is carried out, and because the second and the meter don't have significant changes).

• Thanks for your answer. The uncertainty of $4.5 \times 10^{-10}$ seems very impressive; Wikipedia gives an uncertainty for the Kibble-balance measurement of Planck's constant of about $9 \times 10^{-9}$. – Sheldon Nov 24 '18 at 13:57
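As a numeric illustration of how close the post-redefinition value stays to 12 g/mol (my own check; the CODATA 2018 value of the atomic mass constant is $m_u \approx 1.66053906660\times10^{-27}$ kg with relative uncertainty of a few parts in $10^{10}$):

```python
N_A = 6.02214076e23        # mol^-1, exact by definition since the redefinition
m_u = 1.66053906660e-27    # kg, CODATA 2018 (experimental, finite uncertainty)
M_C12 = 12 * m_u * N_A     # molar mass of carbon-12, kg/mol
print(f"M(12C) = {M_C12 * 1000:.10f} g/mol")   # ~11.9999999958 g/mol
```

The result differs from 12 g/mol only in the tenth decimal place, well inside the quoted uncertainty.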
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9309139251708984, "perplexity": 200.11025816755983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999800.5/warc/CC-MAIN-20190625051950-20190625073950-00040.warc.gz"}
http://aux.planetmath.org/eulerpath
# Euler path

An Euler path in a graph is a path which traverses each edge of the graph exactly once. An Euler path which is a cycle is called an Euler cycle. For loopless graphs without isolated vertices, the existence of an Euler path implies the connectedness of the graph, since traversing every edge of such a graph requires visiting each vertex at least once.

If a connected graph has an Euler path, one can be constructed by applying Fleury's algorithm. A connected graph has an Euler path if and only if it has exactly zero or two vertices of odd degree. If every vertex has even degree, the graph has an Euler cycle.

[Figure: a graph in which every vertex has even degree; it has an Euler cycle.]

[Figure: a graph with exactly two vertices of odd degree; it has an Euler path which is not a cycle.]
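The entry cites Fleury's algorithm; as a sketch of how an Euler path can actually be constructed, here is the simpler stack-based Hierholzer approach (my own illustration, assuming an undirected simple graph in which an Euler path exists):

```python
from collections import defaultdict

def euler_path(edges):
    """Return an Euler path/cycle as a vertex list, assuming one exists
    (connected graph with zero or two odd-degree vertices)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # start at an odd-degree vertex if there is one, else anywhere
    odd = [v for v in adj if len(adj[v]) % 2 == 1]
    stack = [odd[0] if odd else next(iter(adj))]
    path = []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)   # delete the undirected edge once used
            stack.append(u)
        else:
            path.append(stack.pop())
    return path[::-1]

# two triangles sharing vertex 0: every degree even, so an Euler cycle exists
print(euler_path([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]))
```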
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9230900406837463, "perplexity": 364.2284483455689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867644.88/warc/CC-MAIN-20180625092128-20180625112128-00048.warc.gz"}
https://deltaepsilons.wordpress.com/2009/07/19/why-simple-modules-are-often-finite-dimensional-i/
## Why simple modules are often finite-dimensional, I

July 19, 2009. Posted by Akhil Mathew in algebra.

Today I want to talk (partially) about a general fact that first came up as a side remark in the context of my project, and which Dustin Clausen, David Speyer, and I worked out a few days ago. It was a useful bit of algebra for me to think about.

Theorem 1: Let $A$ be an associative algebra with identity over an algebraically closed field $k$; suppose the center $Z \subset A$ is a finitely generated ring over $k$, and $A$ is a finitely generated $Z$-module. Then all simple $A$-modules are finite-dimensional $k$-vector spaces.

We'll get to this after discussing a few other facts about rings, interesting in their own right.

Generalities

Recall that an object $E$ of an abelian category is simple if any subobject is either zero or isomorphic to $E$. In the category of (left) $R$-modules for a ring $R$, this means the only proper submodule is zero. We prove a general fact:

Theorem 2: Let $R$ be a ring (always with identity). Then any (nonzero) simple $R$-module is isomorphic to $R/M$ for $M \subset R$ a maximal left ideal.

Proof: A submodule of $R/M$ would be of the form $L/M$ for $M \subset L$ and $L$ a left ideal, by the isomorphism theorems. But $M$ is maximal. So we get one direction. For the other, a simple $R$-module $S$ is generated by one element $v \in S$; just pick any nonzero $v$, and note that $0 \neq Rv \subset S$. So $S \simeq R/I$ for some left ideal $I$, the kernel of the surjection $R \rightarrow S$ given by $x \mapsto xv$. If $I$ isn't maximal, it's contained in a maximal left ideal $M$, and we have $0 \neq M/I \neq R/I$, so $R/I$ isn't simple. $\Box$

The Nullstellensatz

We want to consider the case $A=Z$ of the initial fact. So we have a finitely generated commutative ring $A$, and we want to show that its simple modules are one-dimensional. In other words, for any maximal ideal $M$, we have $A/M \simeq k$.

Theorem 3: Hypotheses as above, we have $A/M \simeq k$ for any maximal ideal $M$. In detail, if $A$ is a finitely generated commutative ring over the algebraically closed field $k$, then $A/M \simeq k$ for any maximal ideal $M$.

I will discuss a more concrete setting that may clarify it: since $A$ is commutative and finitely generated, we can write $A = k[x_1, \dots, x_n]/I$ for some ideal $I$; let $f: k[x_1, \dots, x_n] \rightarrow A$ be the reduction map, and let $N = f^{-1}(M)$; then we have, by the isomorphism theorems,

$$A/M \simeq k[x_1, \dots, x_n]/(N+I). \qquad (1)$$

This is a field, so $N+I$ is actually maximal. So we only need to consider the right-hand side, i.e. find the maximal ideals in $k[x_1, \dots, x_n]$. The following two results will tell us what they are:

Theorem 4 (Hilbert's Basis Theorem): The ring $k[x_1, \dots, x_n]$ is Noetherian, i.e. each ideal $J \subset k[x_1, \dots, x_n]$ is finitely generated: $J$ can be written as $J = (P_1, \dots, P_r)$ for some polynomials $P_1, \dots, P_r$.

This might actually be a useful topic for a future post, but for now, I'll simply quote it without proof.

Theorem 5 (The Weak Nullstellensatz): Let $f_1, \dots, f_k$ be polynomials in $n$ variables $x_1, \dots, x_n$, over the fixed algebraically closed field $k$. Suppose $f_1, \dots, f_k$ have no common zero. Then the ideal $(f_1, \dots, f_k)=(1)$, i.e.
there are polynomials ${g_1, \dots, g_k}$ such that

$\displaystyle \sum g_i f_i = 1.$

So let's see how Theorem 3 follows from these two results. Indeed, I claim that a maximal ideal ${J}$ of ${k[x_1, \dots, x_n]}$ is of the form ${(x_1-a_1, \dots, x_n-a_n)}$; then in (1) we will see ${A/M \simeq k}$. First of all, we have ${J = (f_1, \dots, f_k)}$ for some polynomials ${f_1, \dots, f_k}$ by the basis theorem. Next ${f_1, \dots, f_k}$ must have a common zero, otherwise ${J=(1)}$. Let the common zero be ${a_1, \dots, a_n}$. Then each ${f_j \in (x_1-a_1, \dots, x_n-a_n)}$, so ${J \subseteq (x_1-a_1, \dots, x_n-a_n)}$; since ${J}$ is maximal and the right-hand side is a proper ideal, the two are equal. This proves the claim. Actually proving the Nullstellensatz could make another interesting algebraic post. For now, I'll recommend David Speyer's interpretation of it, and Terence Tao's elementary proof. In the next post, I'll apply what we've discussed so far to prove our aims.
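A quick computational aside (mine, not from the post): in examples, the weak Nullstellensatz can be checked with Gröbner bases, since a family of polynomials has no common zero exactly when a Gröbner basis of the ideal they generate collapses to {1}. The sketch below assumes SymPy is available; the specific polynomials are my own illustration.

```python
# Checking the unit-ideal criterion and the A/M ~ k phenomenon with SymPy.
from sympy import symbols, groebner

x, y = symbols('x y')

# These two circles have no common (complex) zero, so the ideal is (1):
G = groebner([x**2 + y**2 - 1, x**2 + y**2 - 2], x, y)
print(G.exprs)            # [1] -- the basis collapses to the unit ideal

# A maximal ideal M = (x - 2, y - 3).  Reducing f modulo M evaluates f at
# the point (2, 3), a concrete instance of A/M ~ k:
M = groebner([x - 2, y - 3], x, y)
f = x**3 + 2*x*y
print(M.reduce(f))        # (quotients, remainder); the remainder is f(2, 3) = 20
```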
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 76, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9718356132507324, "perplexity": 133.10455375688605}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737893676.56/warc/CC-MAIN-20151001221813-00079-ip-10-137-6-227.ec2.internal.warc.gz"}
http://hal.in2p3.fr/in2p3-00444084
# Search for the standard model Higgs boson in the ZH->vvbb channel in 5.2 fb-1 of p-pbar collisions at sqrt(s)=1.96 TeV

Abstract : A search is performed for the standard model Higgs boson in 5.2 fb-1 of p-pbar collisions at sqrt(s)=1.96 TeV, collected with the D0 detector at the Fermilab Tevatron Collider. The final state considered is a pair of b jets and large missing transverse energy, as expected from p-pbar->ZH->vvbb production. The search is also sensitive to the WH->lvbb channel when the charged lepton is not identified. For a Higgs boson mass of 115 GeV, a limit is set at the 95% C.L. on the cross section multiplied by branching fraction for p-pbar->(Z/W)H and H->bb that is a factor of 3.7 larger than the standard model value, consistent with the factor of 4.6 expected.

Document type : Journal articles

### Citation

V.M. Abazov, B. Abbott, M. Abolins, B.S. Acharya, M. Adams, et al. Search for the standard model Higgs boson in the ZH->vvbb channel in 5.2 fb-1 of p-pbar collisions at sqrt(s)=1.96 TeV. Physical Review Letters, American Physical Society, 2010, 104, pp.071801. ⟨10.1103/PhysRevLett.104.071801⟩. ⟨in2p3-00444084⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9845250844955444, "perplexity": 4334.266571188911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00494.warc.gz"}
https://computergraphics.stackexchange.com/questions/4768/energy-conservation-of-brdf
# Energy conservation of BRDF

Here is the part confusing me (the quoted book passage was an image and is not reproduced):

1. What do the second equation and $\delta$-function mean?
2. Why is the third equation a sufficient condition, even though a reason is given?

• I'm working through the same book and coincidentally went through that chapter (again) in the last few days. Are you aware of the UC Davis lectures on YouTube that use this book? Unfortunately the lecturer doesn't address your questions specifically in the BRDF lecture. I was formulating an answer for you, but I'm not fully understanding it either, so I'll not reply as I don't want to misinform. I will try to write my thoughts about it when I have time though... – PeteUK Feb 24 '17 at 23:12

You are right to be confused. What I think they should have written: $$L( x \leftarrow \Psi ) = L_{in}\, \delta(\Psi - \alpha)$$ using $\alpha$ instead of $\Theta$, which is already used as a dummy variable in the integral. You should look up the Dirac delta function to learn its meaning and properties. In this context, you can imagine the $L$ above as representing a very concentrated beam (a laser) coming from the angle $\alpha$. Practically, to do the integral over $\Psi$ with $\delta (\Psi - \alpha)$ present in the integrand, remove the integration and replace all occurrences of $\Psi$ with $\alpha$. Then it should be clear how they arrive at the next line, which should read, for all $\alpha$: $$\int f_r(x, \alpha \rightarrow \Theta) \; \cos(N_x, \Theta)\; d \omega_\Theta \leq 1.$$

The fact that this condition is sufficient follows from two facts:

1. that any function (e.g. $L$) can be approximated by a sum of many $\delta$ functions, and
2. that everything is linear.

In other words, if I write $N(L)$ and $D(L)$ for the numerator and denominator of the left-hand side of (2.21), then you can see that if $L = L_1 + L_2$, then $N(L_1+L_2) = N(L_1) + N(L_2)$ and $D(L_1+L_2) = D(L_1) + D(L_2)$. So if I know $N(L_1) \leq D(L_1)$ and $N(L_2) \leq D(L_2)$, then I know $N(L) \leq D(L)$ since $$N(L) = N(L_1) + N(L_2) \leq D(L_1) + D(L_2) = D(L).$$
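To make the sufficiency condition concrete, here is a small Monte Carlo "white furnace"-style check (my own illustration, not from the book): for a Lambertian BRDF $f_r = \rho/\pi$, the hemispherical integral above evaluates to the albedo $\rho \le 1$. The sampling scheme and numbers are assumptions for the sketch.

```python
# Estimate  int f_r(alpha -> Theta) cos(theta) d(omega_Theta)  over the
# hemisphere for a Lambertian BRDF and verify it is <= 1 (exact value: albedo).
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Uniform directions on the upper hemisphere (solid-angle measure):
u1, u2 = rng.random(N), rng.random(N)
cos_theta = u1                     # cos(theta) uniform in [0, 1]

albedo = 0.8
f_r = albedo / np.pi               # Lambertian: independent of directions

# Uniform hemisphere sampling has pdf 1/(2*pi), so the estimator is:
estimate = (f_r * cos_theta).mean() * 2 * np.pi
print(estimate)                    # ~0.8 <= 1, as the condition requires
```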
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8920018672943115, "perplexity": 175.955786008702}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525699.51/warc/CC-MAIN-20190718170249-20190718192249-00158.warc.gz"}
http://mathhelpforum.com/discrete-math/185402-r-combinations-repetition-allowed.html
# Thread: r-Combinations with Repetition Allowed

1. ## r-Combinations with Repetition Allowed

Another way to count the number of nonnegative integral solutions to an equation of the form $x_1+x_2+\cdots+x_n=m$ is to reduce the problem to one of finding the number of n-tuples $(y_1,y_2,\dots,y_n)$ with $0\le y_1\le y_2\le\cdots\le y_n\le m$. The reduction results from letting $y_i=x_1+x_2+\cdots+x_i$ for each $i=1,2,\dots,n$. Use this approach to derive a general formula for the number of nonnegative integral solutions to $x_1+x_2+\cdots+x_n=m$.

I'm not even sure where to start with this problem. I would appreciate any help.

2. ## Re: r-Combinations with Repetition Allowed

Originally Posted by lovesmath

Where did you get this problem? It is really puzzling. The solution to "counting the number of nonnegative integral solutions to an equation of the form $x_1+x_2+\cdots+x_n=m$" is so easy and easily derived. It is $\binom{m+n-1}{m}$. I do not understand what this question means, really.

3. ## Re: r-Combinations with Repetition Allowed

It is from a Discrete Math textbook. I am familiar with the formula $\binom{m+n-1}{m}$; I just wasn't sure how to "derive" it. Thanks for your help!

4. ## Re: r-Combinations with Repetition Allowed

Which textbook?

5. ## Re: r-Combinations with Repetition Allowed

Discrete Mathematics with Applications, 4th edition.

6. ## Re: r-Combinations with Repetition Allowed

Originally Posted by lovesmath: Discrete Mathematics with Applications, 4th edition.

I no longer have Ken Rosen's Fourth edition. But I see in the Fifth edition he has changed the question. He wants you to show that there is a one-to-one correspondence between the set of r-combinations with repetition allowed of the set $S=\{1,2,\cdots,n\}$ and the set of r-combinations of the set $T=\{1,2,\cdots,n+r-1\}$. I hope that helps you. But I will not be a part of reinventing the wheel.
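For readers who want to sanity-check the count $\binom{m+n-1}{m}$ and the $y_i$ reduction from the problem statement, here is a small brute-force sketch (mine, not from the thread):

```python
# Enumerate nonnegative integer solutions of x1 + ... + xn = m and compare
# the count with the stars-and-bars formula C(m+n-1, m).
from itertools import product
from math import comb

n, m = 3, 5
solutions = [x for x in product(range(m + 1), repeat=n) if sum(x) == m]
print(len(solutions), comb(m + n - 1, m))   # 21 21

# The reduction in the problem: y_i = x_1 + ... + x_i gives a nondecreasing
# tuple with 0 <= y_1 <= ... <= y_n = m, and the map is a bijection:
ys = {tuple(sum(x[:i + 1]) for i in range(n)) for x in solutions}
print(len(ys))                              # 21 again
```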
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9619930982589722, "perplexity": 399.01004307728635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522205.7/warc/CC-MAIN-20171213065419-20171213085419-00636.warc.gz"}
https://learncybers.com/partial-derivative/
# Partial Derivative

## Partial Derivative Definition:

Partial derivatives are defined as derivatives of a function of multiple variables when all but the variable of interest are held fixed during the differentiation.

Let f(x, y) be a function with two variables. If we keep y constant and differentiate f (assuming f is differentiable) with respect to the variable x, using the rules and formulas of differentiation, we obtain what is called the partial derivative of f with respect to x, denoted ∂f/∂x. Similarly, if we keep x constant and differentiate f (assuming f is differentiable) with respect to the variable y, we obtain what is called the partial derivative of f with respect to y, denoted ∂f/∂y.

Examples:

(The functions and worked derivatives in Examples 1-4 were images in the original page and did not survive extraction; only Example 5, whose derivatives appear in the text, is reproduced.)

Example# 5: Find fx(2, 3) and fy(2, 3) if f(x, y) is given by f(x, y) = x²y + 2y (recovered from the derivatives below; the original formula was an image).

Solution of example# 5: We first find the partial derivatives fx and fy:

fx(x, y) = 2xy

fy(x, y) = x² + 2

We now calculate fx(2, 3) and fy(2, 3) by substituting x and y by their given values:

fx(2, 3) = 2(2)(3) = 12

fy(2, 3) = 2² + 2 = 6

### Phrasing and notation

Here are some of the phrases you might hear in reference to the operation ∂f/∂x:

• "The partial derivative of f with respect to x"
• "Del f, del x"
• "Partial f, partial x"
• "The partial derivative (of f) in the x-direction"

### Alternate notation:

In the same way that people sometimes prefer to write df/dx instead of f′(x), we have the following notation:

fx ↔ ∂f/∂x

fy ↔ ∂f/∂y

f(some variable) ↔ ∂f/∂(that same variable)

### A more formal definition:

Although thinking of dx or ∂x as really tiny changes in the value of x is a useful intuition, it is healthy to occasionally step back and remember that defining things precisely requires introducing limits. After all, what specific small value would ∂x be? One one-hundredth? One one-millionth? 10^(-10^10)? The point of calculus is that we don't use any one tiny number, but instead consider all possible values and analyze what tends to happen as they approach a limiting value. The single variable derivative, for example, is defined like this:

df/dx (x₀) = lim_{h→0} [ f(x₀ + h) − f(x₀) ] / h

• h represents the "tiny value" that we intuitively think of as dx
• The h→0 under the limit indicates that we care about very small values of h, those approaching 0.
• f(x₀ + h) − f(x₀) is the change in the output that results from adding h to the input, which is what we think of as df

Formally defining the partial derivative looks almost identical.
If f(x, y, …) is a function with multiple inputs, here's how that looks:

∂f/∂x (x₀, y₀, …) = lim_{h→0} [ f(x₀ + h, y₀, …) − f(x₀, y₀, …) ] / h

The point is that h, which represents a tiny tweak to the input, is added to different input variables depending on which partial derivative we are taking. People will often refer to this as the limit definition of a partial derivative.

## Second Partial Derivative:

A brief overview of second partial derivatives, the symmetry of mixed partial derivatives, and higher order partial derivatives.

### Notations of Second Order Partial Derivatives:

For a two-variable function f(x, y), we can define 4 second order partial derivatives along with their notations:

fxx = ∂²f/∂x², fxy = ∂²f/∂y∂x, fyx = ∂²f/∂x∂y, fyy = ∂²f/∂y²

### Examples with Detailed solutions:

Example# 1: Find fxx, fyy given that f(x, y) = sin(xy).

Solution:

fxx may be calculated as follows: fxx = ∂²f/∂x² = ∂(∂[sin(xy)]/∂x)/∂x = ∂(y cos(xy))/∂x = −y² sin(xy)

fyy can be calculated as follows: fyy = ∂²f/∂y² = ∂(∂[sin(xy)]/∂y)/∂y = ∂(x cos(xy))/∂y = −x² sin(xy)

Example# 2: Find fxx, fyy, fxy, fyx given that f(x, y) = x³ + 2xy.

Solution:

fxx = ∂²f/∂x² = ∂(∂[x³ + 2xy]/∂x)/∂x = ∂(3x² + 2y)/∂x = 6x

fyy = ∂²f/∂y² = ∂(∂[x³ + 2xy]/∂y)/∂y = ∂(2x)/∂y = 0

fxy = ∂²f/∂y∂x = ∂(∂[x³ + 2xy]/∂x)/∂y = ∂(3x² + 2y)/∂y = 2

fyx = ∂²f/∂x∂y = ∂(∂[x³ + 2xy]/∂y)/∂x = ∂(2x)/∂x = 2

Example# 3: Find fxx, fyy, fxy, fyx given that f(x, y) = x³y⁴ + x²y.

Solution:

fxx = ∂(∂[x³y⁴ + x²y]/∂x)/∂x = ∂(3x²y⁴ + 2xy)/∂x = 6xy⁴ + 2y

fyy = ∂(∂[x³y⁴ + x²y]/∂y)/∂y = ∂(4x³y³ + x²)/∂y = 12x³y²

fxy = ∂(∂[x³y⁴ + x²y]/∂x)/∂y = ∂(3x²y⁴ + 2xy)/∂y = 12x²y³ + 2x

fyx = ∂(∂[x³y⁴ + x²y]/∂y)/∂x = ∂(4x³y³ + x²)/∂x = 12x²y³ + 2x

The gradient stores all the partial derivative information of a multivariable function. But it's more than a mere storage device; it has several wonderful interpretations and many, many uses.

Example: The gradient of a function f, denoted ∇f, is the collection of all its partial derivatives into a vector. This is most easily understood with an example. Let f(x, y) = x²y. Find ∇f(3, 2).

Solution: The gradient is just the vector of partial derivatives. The partial derivatives of f at the point (x, y) = (3, 2) are:

∂f/∂x (x, y) = 2xy, so ∂f/∂x (3, 2) = 12

∂f/∂y (x, y) = x², so ∂f/∂y (3, 2) = 9

Therefore, the gradient is ∇f(3, 2) = 12i + 9j = (12, 9).
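The worked examples above are easy to verify with a computer algebra system; the following sketch (my own addition) assumes SymPy is available:

```python
# Verify the second partial derivatives and the gradient example with SymPy.
from sympy import symbols, diff, sin

x, y = symbols('x y')

f = sin(x * y)
print(diff(f, x, x))    # -y**2*sin(x*y)   (f_xx from Example 1)
print(diff(f, y, y))    # -x**2*sin(x*y)   (f_yy from Example 1)

g = x**3 * y**4 + x**2 * y
print(diff(g, x, y))    # 12*x**2*y**3 + 2*x  (f_xy = f_yx, Example 3)

# Gradient of f(x, y) = x**2 * y at (3, 2), as in the last example:
h = x**2 * y
grad = [diff(h, v).subs({x: 3, y: 2}) for v in (x, y)]
print(grad)             # [12, 9]
```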
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9230517745018005, "perplexity": 2031.1935872943832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154127.53/warc/CC-MAIN-20210731234924-20210801024924-00606.warc.gz"}
https://indico.cern.ch/event/35523/contributions/839877/
# CHEP 2009 Mar 21 – 27, 2009 Prague Europe/Prague timezone

## Alignment of the LHCb detector with Kalman fitted tracks

Mar 24, 2009, 8:00 AM 1h Prague

#### Prague

Prague Congress Centre 5. května 65, 140 00 Prague 4, Czech Republic Board: Tuesday 011 poster Event Processing

### Speakers

Jan Amoraal (NIKHEF) Wouter Hulsbergen (NIKHEF)

### Description

We report on an implementation of a global chi-square algorithm for the simultaneous alignment of all tracking systems in the LHCb detector. Our algorithm uses hit residuals from the standard LHCb track fit, which is based on a Kalman filter. The algorithm is implemented in the LHCb reconstruction framework and exploits the fact that all sensitive detector elements have the same geometry interface. A vertex constraint is implemented by fitting tracks to a common point and propagating the change in track parameters to the hit residuals. To remove unconstrained or poorly constrained degrees of freedom (so-called weak modes), the average movements of (subsets of) alignable detector elements can be fixed with Lagrange constraints. Alternatively, weak modes can be removed with a cutoff in the eigenvalue spectrum of the second derivative of the chi-square. As for all LHCb reconstruction and analysis software, the configuration of the algorithm is done in Python and gives detailed control over the selection of alignable degrees of freedom and constraints. We study the performance of the algorithm on simulated events and first LHCb data.

Poster
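The abstract's core linear-algebra step (minimize a global chi-square over alignment parameters, then suppress weak modes with an eigenvalue cutoff) can be sketched in a few lines. The following toy example is my own illustration, not LHCb code; the matrices, noise level, and cutoff are invented for the demonstration.

```python
# Toy linearized alignment step: accumulate normal equations from residuals r
# with derivatives D = dr/d(alignment), then cut small eigenvalues before solving.
import numpy as np

rng = np.random.default_rng(1)
n_par, n_hits = 6, 500

D = rng.normal(size=(n_hits, n_par))      # residual derivatives (stand-in)
D[:, -1] = D[:, -2]                       # make one direction unconstrained (a "weak mode")
true_shift = np.array([0.1, -0.2, 0.05, 0.0, 0.03, 0.0])
r = D @ true_shift + 0.01 * rng.normal(size=n_hits)   # measured residuals

A = D.T @ D                               # second derivative of the chi-square
b = D.T @ r
w, V = np.linalg.eigh(A)
keep = w > 1e-6 * w.max()                 # eigenvalue cutoff: drop weak modes
delta = V[:, keep] @ ((V[:, keep].T @ b) / w[keep])
print(delta)                              # corrections; the weak-mode component is set to 0
```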
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8152610063552856, "perplexity": 2710.491238103354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360951.9/warc/CC-MAIN-20211201203843-20211201233843-00567.warc.gz"}
https://www.physicsforums.com/threads/magnetic-force-experiment.282294/
# Magnetic Force experiment

1. Dec 31, 2008

### MagElectPhys

Hi, I'm buying magnets for a required experiment in my magnetism and electricity physics class. The supplier gives two important pieces of information: BrMax: 3850 gauss and Pull Force: 33 lbs. I was wondering if there is a relatively simple formula so that I can calculate the force of the magnet at a point x from the magnet using the gauss value. Essentially, I want to calculate the force of the magnet in newtons. Thanks.

2. Dec 31, 2008

### Gnosis

Divide pounds by .2248 and it will yield newtons. 33 pounds / .2248 = 146.797153 newtons. Multiplying newtons by .2248 yields pounds.

3. Jan 1, 2009

### MagElectPhys

Yes, I understand that, but I want to know how to use gauss to determine force in newtons, if possible.
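There is no exact simple formula from Br alone, because the answer depends on the magnet's shape and on what it is pulling. One common engineering approximation (my own hedged sketch, not from the thread) uses the on-axis field of a cylindrical magnet and estimates the pull toward a thick steel plate as F ≈ B(x)²A/(2μ₀). The magnet dimensions below are assumed values, and vendor contact pull ratings are measured under idealized conditions, so treat this as an order-of-magnitude estimate only.

```python
# Rough estimate of pull force vs. distance for a cylindrical magnet.
import numpy as np

mu0 = 4e-7 * np.pi          # T*m/A
Br = 0.385                  # 3850 gauss in tesla
R, L = 0.0127, 0.0127       # assumed 1/2-inch radius and length
A = np.pi * R**2

def B_axis(x):
    """Standard on-axis field (tesla) at distance x (m) from the face."""
    return (Br / 2) * ((x + L) / np.sqrt((x + L)**2 + R**2)
                       - x / np.sqrt(x**2 + R**2))

for x in (0.0, 0.005, 0.01, 0.02):
    F = B_axis(x)**2 * A / (2 * mu0)     # pull-to-thick-steel approximation
    print(f"x = {x*1000:4.1f} mm:  B = {B_axis(x):.3f} T,  F ~ {F:.1f} N")
```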
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.953368604183197, "perplexity": 2193.2037778981776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647299.37/warc/CC-MAIN-20180320052712-20180320072712-00152.warc.gz"}
http://math.stackexchange.com/questions/108284/example-of-a-closed-subspace-of-a-banach-space-which-is-not-complemented
# Example of a closed subspace of a Banach space which is not complemented?

In this post, all vector spaces are assumed to be real or complex. Let $(X, ||\cdot||)$ be a Banach space, $Y \subset X$ a closed subspace. $Y$ is called $\underline{\mathrm{complemented}}$ if there is a closed subspace $Z \subset X$ such that $X =Y \oplus Z$ as topological vector spaces. If $H$ is a Hilbert space, every closed subspace $Y$ is complemented; the orthogonal complement $Y^{\bot}$ is a closed subspace of $H$ and we have $H=Y \oplus Y^{\bot}$. A famous theorem of Lindenstrauss and Tzafriri (which can be found in their article "On the complemented subspaces problem", Israel Journal of Mathematics, Vol. 9, No. 2, pp. 263-269) asserts that the converse is true as well. More precisely, if $(X, ||\cdot||)$ is a Banach space such that every closed subspace is complemented, then $||\cdot||$ is induced by a scalar product, i.e. $(X,||\cdot||)$ is a Hilbert space.

Now to my question. Can you give me an example of a Banach space $(X,||\cdot||)$, which is not a Hilbert space, and of a closed subspace $Y \subset X$ which is not complemented? It is easily seen that $Y$ must be both infinite-dimensional and infinite-codimensional, for every finite-dimensional and every (closed) finite-codimensional subspace is complemented. I thought about something like $c_{0} \subset (\ell^{\infty}, ||\cdot||_{\infty})$, the closed subspace of null sequences in the Banach space of bounded sequences, but couldn't produce a proof that no closed complement exists in that case. Can you help me, either by proving that $c_{0}$ is not complemented (if that's true at all) or by giving me a different example?

- Show that a complement to $c_0$ contains a sequence of bounded linear functionals which separates points, while $\ell^\infty / c_0$ doesn't. – Mark Feb 11 '12 at 22:23

- So you mean when I consider $\ell^{\infty}$ as the dual of $\ell^{1}$? I will try this one out. Thanks! I might come back to you, when I'm stuck. – Nils Matthes Feb 12 '12 at 13:46

Your suspicion about $c_0$ is correct. A couple of other examples: The disc algebra (those functions in $C(\mathbb{T})$ which are restrictions of functions analytic in the open unit disc) is closed in $C(\mathbb{T})$ but not complemented. Similarly, in $L^1(\mathbb{T})$, the subspace $H^1(\mathbb{T})$ consisting of functions whose negative Fourier coefficients vanish is closed but not complemented. See Rudin's Functional Analysis (the proof isn't very easy).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265194535255432, "perplexity": 148.2568997813206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095677.90/warc/CC-MAIN-20150627031815-00280-ip-10-179-60-89.ec2.internal.warc.gz"}
https://wiki.math.wisc.edu/index.php?title=Colloquia_2012-2013&diff=next&oldid=3196
# Mathematics Colloquium

All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.

## Fall 2011

| date | speaker | title | host(s) |
|---|---|---|---|
| Sep 9 | Manfred Einsiedler (ETH-Zurich) | Periodic orbits on homogeneous spaces | Fish |
| Sep 16 | Richard Rimanyi (UNC-Chapel Hill) | Global singularity theory | Maxim |
| Sep 23 | Andrei Caldararu (UW-Madison) | The Pfaffian-Grassmannian derived equivalence | (local) |
| Sep 30 | Scott Armstrong (UW-Madison) | Optimal Lipschitz extensions, the infinity Laplacian, and tug-of-war games | (local) |
| Oct 7 | Hala Ghousseini (University of Wisconsin-Madison) | Developing Mathematical Knowledge for Teaching in, from, and for Practice | Lempp |
| Oct 14 | Alex Kontorovich (Yale) | On Zaremba's Conjecture | Shamgar |
| Oct 19, Wed | Bernd Sturmfels (UC Berkeley) | Convex Algebraic Geometry | distinguished lecturer; Shamgar |
| Oct 20, Thu | Bernd Sturmfels (UC Berkeley) | Quartic Curves and Their Bitangents | distinguished lecturer; Shamgar |
| Oct 21 | Bernd Sturmfels (UC Berkeley) | Multiview Geometry | distinguished lecturer; Shamgar |
| Oct 28 | Roman Holowinsky (OSU) | Equidistribution Problems and L-functions | Street |
| Nov 4 | Sijue Wu (U Michigan) | Wellposedness of the two and three dimensional full water wave problem | Qin Li |
| Nov 7, Mon, 3pm, SMI 133 | Sastry Pantula (NCSU and DMS/NSF) | Opportunities in Mathematical and Statistical Sciences at DMS | Joint Math/Stat Colloquium |
| Nov 11 | Henri Berestycki (EHESS and University of Chicago) | Reaction-diffusion equations and propagation phenomena | Wasow lecture |
| Nov 16, Wed | Henry Towsner (U of Conn-Storrs) | An Analytic Approach to Uniformity Norms | Steffen |
| Nov 18 | Benjamin Recht (UW-Madison, CS Department) | The Convex Geometry of Inverse Problems | Jordan |
| Nov 22, Tue, 2:30PM, B205 | Zhiwei Yun (MIT) | Motives and the inverse Galois problem | Tonghai |
| Nov 28, Mon, 4PM | Burglind Joricke (Institut Fourier, Grenoble) | Analytic knots, satellites and the 4-ball genus | Gong |
| Nov 29, Tue, 2:30PM, B102 | Isaac Goldbring (UCLA) | Nonstandard methods in Lie theory | Lempp |
| Nov 30, Wed, 4PM | Bing Wang (Simons Institute) | Uniformization of algebraic varieties | Sean |
| Dec 2 | Robert Dudley (University of California, Berkeley) | From Gliding Ants to Andean Hummingbirds: The Evolution of Animal Flight Performance | Jean-Luc |
| Dec 5, Mon, 2:25PM, Room 901 | Dima Arinkin (UNC-Chapel Hill) | Autoduality of Jacobians for singular curves | Andrei |
| Dec 7, Wed, 4PM | Toan Nguyen (Brown University) | On the stability of Prandtl boundary layers and the inviscid limit of the Navier-Stokes equations | Misha Feldman |
| Dec 9 | Xinwen Zhu (Harvard University) | Adelic uniformization of moduli of G-bundles | Tonghai |
| Dec 12, Mon, 4PM | Jonathan Hauenstein (Texas A&M) | Numerical solving of polynomial equations and applications | Thiffeault |

## Spring 2012

| date | speaker | title | host(s) |
|---|---|---|---|
| Jan 26, Thu | Peter Constantin (University of Chicago) | TBA | distinguished lecturer |
| Jan 27 | Peter Constantin (University of Chicago) | TBA | distinguished lecturer |
| Feb 3 | Scheduled | | Street |
| Feb 24 | Malabika Pramanik (University of British Columbia) | TBA | Benguria |
| March 2 | Guang Gong (University of Waterloo) | TBA | Shamgar |
| March 16 | Charles Doran (University of Alberta) | TBA | Matt Ballard |
| March 23 | Martin Lorenz (Temple University) | TBA | Don Passman |
| March 30 | Paolo Aluffi (Florida State University) | TBA | Maxim |
| April 6 | Spring recess | | |
| April 13 | Ricardo Cortez (Tulane) | TBA | Mitchell |
| April 18 | Benedict H. Gross (Harvard) | TBA | distinguished lecturer |
| April 19 | Benedict H. Gross (Harvard) | TBA | distinguished lecturer |
| April 20 | Robert Guralnick (University of Southern California) | TBA | Shamgar |
| April 27 | Tentatively Scheduled | | Street |
| May 4 | Mark Andrea de Cataldo (Stony Brook) | TBA | Maxim |
| May 11 | Tentatively Scheduled | | Shamgar |

## Abstracts

### Fri, Sept 9: Manfred Einsiedler (ETH-Zurich)

Periodic orbits on homogeneous spaces

We call an orbit xH of a subgroup H<G on a quotient space Gamma \ G periodic if it has finite H-invariant volume. These orbits have intimate connections to a variety of number theoretic problems, e.g. both integer quadratic forms and number fields give rise to periodic orbits, and these periodic orbits then relate to local-global problems for the quadratic forms or to special values of L-functions. We will discuss whether a sequence of periodic orbits equidistributes in Gamma \ G assuming the orbits become more complicated (which can be measured by a discriminant). If H is a diagonal subgroup (also called torus or Cartan subgroup), this is not always the case but can be true with a bit more averaging. As a theorem of Mozes and Shah shows, the case where H is generated by unipotents is well understood and is closely related to the work of M. Ratner. We then ask about the rate of approximation, where the situation is much more complex. The talk is based on several papers which are joint work with E. Lindenstrauss, Ph. Michel, and A. Venkatesh resp. with G. Margulis and A. Venkatesh.

### Fri, Sept 16: Richard Rimanyi (UNC)

Global singularity theory

The topology of the spaces A and B may force every map from A to B to have certain singularities. For example, a map from the Klein bottle to 3-space must have double points. A map from the projective plane to the plane must have an odd number of cusp points. To a singularity one may associate a polynomial (its Thom polynomial) which measures how topology forces this particular singularity. In the lecture we will explore the theory of Thom polynomials and their applications in enumerative geometry. Along the way, we will meet a wide spectrum of mathematical concepts from geometric theorems of the ancient Greeks to the cohomology ring of moduli spaces.

### Fri, Sept 23: Andrei Caldararu (UW-Madison)

The Pfaffian-Grassmannian derived equivalence

String theory relates certain seemingly different manifolds through a relationship called mirror symmetry. Discovered about 25 years ago, this story is still very mysterious from a mathematical point of view. Despite the name, mirror symmetry is not entirely symmetric -- several distinct spaces can be mirrors to a given one. When this happens it is expected that certain invariants of these "double mirrors" match up. For a long time the only known examples of double mirrors arose through a simple construction called a flop, which led to the conjecture that this would be a general phenomenon. In joint work with Lev Borisov we constructed the first counterexample to this, which I shall present. Explicitly, I shall construct two Calabi-Yau threefolds which are not related by flops, but are derived equivalent, and therefore are expected to arise through a double mirror construction. The talk will be accessible to a wide audience, in particular to graduate students. There will even be several pictures!
### Fri, Sept 30: Scott Armstrong (UW-Madison)

Optimal Lipschitz extensions, the infinity Laplacian, and tug-of-war games

Given a nice bounded domain, and a Lipschitz function defined on its boundary, consider the problem of finding an extension of this function to the closure of the domain which has minimal Lipschitz constant. This is the archetypal problem of the calculus of variations "in the sup-norm". There can be many such minimal Lipschitz extensions, but there is a unique minimizer once we properly "localize" this Lipschitz minimizing property. This minimizer is characterized by the infinity Laplace equation: the Euler-Lagrange equation for our optimization problem. This PDE is a very highly degenerate nonlinear elliptic equation which does not possess smooth solutions in general. In this talk I will discuss what we know about the infinity Laplace equation, what the important open questions are, and some interesting recent developments. We will even play a probabilistic game called "tug-of-war".

### Fri, Oct 7: Hala Ghousseini (University of Wisconsin-Madison)

Developing Mathematical Knowledge for Teaching in, from, and for Practice

Recent research in mathematics education has established that successful teaching requires a specialized kind of professional knowledge known as Mathematical Knowledge for Teaching (MKT). The mathematics education community, however, is beginning to appreciate that to be effective, teachers not only need to know MKT but also be able to use it in interaction with students (Hill & Ball, 2010). Very few examples exist at the level of actual practice of how novice teachers develop such knowledge for use. I will report on my current work on the Learning in, from, and for Practice project to develop, implement, and study what mathematics teacher educators can do to support novice teachers in acquiring and using Mathematical Knowledge for Teaching.

### Fri, Oct 14: Alex Kontorovich (Yale)

On Zaremba's Conjecture

It is folklore that modular multiplication is "random". This concept is useful for many applications, such as generating pseudorandom sequences, or in quasi-Monte Carlo methods for multi-dimensional numerical integration. Zaremba's theorem quantifies the quality of this "randomness" in terms of certain Diophantine properties involving continued fractions. His 40-year-old conjecture predicts the ubiquity of moduli for which this Diophantine property is uniform. It is connected to Markoff and Lagrange spectra, as well as to families of "low-lying" divergent geodesics on the modular surface. We prove that a density one set satisfies Zaremba's conjecture, using recent advances such as the circle method and estimates for bilinear forms in the Affine Sieve, as well as a "congruence" analog of the renewal method in the thermodynamical formalism. This is joint work with Jean Bourgain.

### Wed, Oct 19: Bernd Sturmfels (Berkeley)

Convex Algebraic Geometry

This lecture concerns convex bodies with an interesting algebraic structure. A primary focus lies on the geometry of semidefinite optimization. Starting with elementary questions about ellipses in the plane, we move on to discuss the geometry of spectrahedra, orbitopes, and convex hulls of real varieties.

### Thu, Oct 20: Bernd Sturmfels (Berkeley)

Quartic Curves and Their Bitangents

We present a computational study of plane curves of degree four, with primary focus on writing their defining polynomials as sums of squares and as symmetric determinants.
Number theorists will enjoy the appearance of the Weyl group $E_7$ as the Galois group of the 28 bitangents. Based on joint work with Daniel Plaumann and Cynthia Vinzant, this lecture spans a bridge from 19th century algebra to 21st century optimization.

### Fri, Oct 21: Bernd Sturmfels (Berkeley)

Multiview Geometry

The study of two-dimensional images of three-dimensional scenes is foundational for computer vision. We present work with Chris Aholt and Rekha Thomas on the polynomials characterizing images taken by $n$ cameras. Our varieties are threefolds that vary in a family of dimension $11n-15$ when the cameras are moving. We use toric geometry and Hilbert schemes to characterize degenerations of camera positions.

### Fri, Oct 28: Roman Holowinsky (OSU)

Equidistribution Problems and L-functions

There are several equidistribution problems of arithmetic nature which have had shared interest between the fields of Ergodic Theory and Number Theory. The relation of such problems to homogeneous flows and the reduction to analysis of special values of automorphic L-functions has resulted in increased collaboration between these two fields of mathematics. We will discuss two such equidistribution problems: the equidistribution of Heegner points for negative quadratic discriminants and the equidistribution of mass of Hecke eigenforms. Equidistribution follows upon establishing subconvexity bounds for the associated L-functions, and these are fine examples of why one might be interested in such objects.

### Fri, Nov 4: Sijue Wu (U Michigan)

Wellposedness of the two and three dimensional full water wave problem

We consider the question of global in time existence and uniqueness of solutions of the infinite depth full water wave problem. We show that the nature of the nonlinearity of the water wave equation is essentially of cubic and higher orders. For any initial data that is small in its kinetic energy and height, we show that the 2-D full water wave equation is uniquely solvable almost globally in time. And for any initial interface that is small in its steepness and velocity, we show that the 3-D full water wave equation is uniquely solvable globally in time.

### Mon, Nov 7: Sastry Pantula (DMS/NSF, NCSU)

Opportunities in Mathematical and Statistical Sciences at DMS

In this talk, I will give you an overview of the funding and other opportunities at DMS for mathematicians and statisticians. I will also talk about our new program in computational and data-enabled science and engineering in mathematical and statistical sciences (CDS&E-MSS).

### Fri, Nov 11: Henri Berestycki (EHESS and University of Chicago)

Reaction-diffusion equations and propagation phenomena

Starting with the description of reaction-diffusion mechanisms in physics, biology and ecology, I will explain the motivation for this class of non-linear partial differential equations and mention some of the interesting history of these systems. Then, I will review classical results in the homogeneous setting and discuss their relevance. The second part of the lecture will be concerned with recent developments in non-homogeneous settings, in particular for Fisher-KPP type equations. Such problems are encountered in models from ecology. The mathematical theory will be seen to shed light on questions arising in this context.
### Wed, Nov 16: Henry Towsner (U of Conn-Storrs)

An Analytic Approach to Uniformity Norms

The Gowers uniformity norms have proven to be a powerful tool in extremal combinatorics, and a number of "structure theorems" have been given showing that the uniformity norms provide a dichotomy between "structured" objects and "random" objects. While analogous norms (the Gowers-Host-Kra norms) exist in dynamical systems, they do not quite correspond to the uniformity norms in the finite setting. We describe an analytic approach to the uniformity norms in which the "correspondence principle" between the finite setting and the infinite analytic setting remains valid.

### Fri, Nov 18: Ben Recht (UW-Madison)

The Convex Geometry of Inverse Problems

Deducing the state or structure of a system from partial, noisy measurements is a fundamental task throughout the sciences and engineering. The resulting inverse problems are often ill-posed because there are fewer measurements available than the ambient dimension of the model to be estimated. In practice, however, many interesting signals or models contain few degrees of freedom relative to their ambient dimension: a small number of genes may constitute the signature of a disease, very few parameters may specify the correlation structure of a time series, or a sparse collection of geometric constraints may determine a molecular configuration. Discovering, leveraging, or recognizing such low-dimensional structure plays an important role in making inverse problems well-posed. In this talk, I will propose a unified approach to transform notions of simplicity and latent low-dimensionality into convex penalty functions. This approach builds on the success of generalizing compressed sensing to matrix completion, and greatly extends the catalog of objects and structures that can be recovered from partial information. I will focus on a suite of data analysis algorithms designed to decompose general signals into sums of atoms from a simple---but not necessarily discrete---set. These algorithms are derived in a convex optimization framework that encompasses previous methods based on l1-norm minimization and nuclear norm minimization for recovering sparse vectors and low-rank matrices. I will provide sharp estimates of the number of generic measurements required for exact and robust recovery of a variety of structured models. I will then detail several example applications and describe how to scale the corresponding inference algorithms to massive data sets.

### Tue, Nov 22: Zhiwei Yun (MIT)

"Motives and the inverse Galois problem"

We will use geometric Langlands theory to solve two problems simultaneously. One is Serre's question about whether there exist motives over Q with motivic Galois groups of type E_8 or G_2; the other is whether there are Galois extensions of Q with Galois groups E_8(p) or G_2(p) (the finite simple groups of Lie type). The answers to both questions are YES. No familiarity with either motives or geometric Langlands or E_8 will be assumed.

### Mon, Nov 28: Burglind Joricke (Institut Fourier, Grenoble)

"Analytic knots, satellites and the 4-ball genus"

After introducing classical geometric knot invariants and satellites I will concentrate on knots or links in the unit sphere in $\mathbb C^2$ which bound a complex curve (respectively, a smooth complex curve) in the unit ball. Such a knot or link will be called analytic (respectively, smoothly analytic).
For analytic satellite links of smoothly analytic knots there is a sharp lower bound for the 4-ball genus. It is given in terms of the 4-ball genus of the companion and the winding number. No such estimate is true in the general case. There is a natural relation to the theory of holomorphic mappings from open Riemann surfaces into the space of monic polynomials without multiple zeros. I will briefly touch on related problems.

### Tue, Nov 29: Isaac Goldbring (UCLA)

"Nonstandard methods in Lie theory"

Nonstandard analysis is a way of rigorously using "ideal" elements, such as infinitely small and infinitely large elements, in mathematics. In this talk, I will survey the use of nonstandard methods in Lie theory. I will focus on two applications in particular: the positive solution to Hilbert's fifth problem (which establishes that locally euclidean groups are Lie groups) and nonstandard hulls of infinite-dimensional Lie groups and algebras. I will also briefly discuss the recent work of Breuillard, Green, and Tao (extending work of Hrushovski) concerning the classification of approximate groups, which utilizes nonstandard methods and the local version of Hilbert's fifth problem in an integral way. I will assume no prior knowledge of nonstandard analysis or Lie theory.

### Wed, Nov 30: Bing Wang (Simons Center for Geometry and Physics)

Uniformization of algebraic varieties

For algebraic varieties of general type with mild singularities, we show that the Bogomolov-Yau inequality holds. If equality is attained, then this variety is a global quotient of complex hyperbolic space away from a subvariety.

### Mon, Dec 5: Dima Arinkin (UNC-Chapel Hill)

"Autoduality of Jacobians for singular curves"

Let C be a (smooth projective algebraic) curve. It is well known that the Jacobian J of C is a principally polarized abelian variety. In other words, J is self-dual in the sense that J is identified with the space of topologically trivial line bundles on itself. Suppose now that C is singular. The Jacobian of C parametrizes topologically trivial line bundles on C; it is an algebraic group which is no longer compact. By considering torsion-free sheaves instead of line bundles, one obtains a natural singular compactification J' of J. In this talk, I consider (projective) curves C with planar singularities. The main result is that J' is self-dual: J' is identified with a space of torsion-free sheaves on itself. This autoduality naturally fits into the framework of the geometric Langlands conjecture; I hope to sketch this relation in my talk.

### Wed, Dec 7: Toan Nguyen (Brown University)

"On the stability of Prandtl boundary layers and the inviscid limit of the Navier-Stokes equations"

In fluid dynamics, one of the most classical issues is to understand the dynamics of viscous fluid flows past solid bodies (e.g., aircrafts, ships, etc.), especially in the regime of very high Reynolds numbers (or small viscosity). Boundary layers are typically formed in a thin layer near the boundary. In this talk, I shall present various ill-posedness results on the classical Prandtl boundary-layer equation, and discuss the relevance of boundary-layer expansions and the vanishing viscosity limit problem of the Navier-Stokes equations. I will also discuss viscosity effects in destabilizing stable inviscid flows.
### Dec 9: Xinwen Zhu (Harvard University)

"Adelic uniformization of moduli of G-bundles"

It is well-known from Weil that the isomorphism classes of rank n vector bundles on an algebraic curve can be written as the set of certain double cosets of GL(n,A), where A is the adeles of the curve. I will introduce such a presentation in the world of algebraic geometry and discuss two of its applications: the first is the Tamagawa number formula in the function field case (proved by Gaitsgory-Lurie), which is a formula for the volume of the automorphic space; and the second is the Verlinde formula in positive characteristic, which is a formula for the dimensions of global sections of certain line bundles on the moduli spaces.

### Mon, Dec 12: Jonathan Hauenstein (Texas A&M)

"Numerical solving of polynomial equations and applications"

Systems of polynomial equations arise in many areas of mathematics, science, economics, and engineering, with their solutions, for example, describing equilibria of chemical reactions and economics models, and the design of specialized robots. These applications have motivated the development of numerical methods used for solving polynomial systems, collectively called Numerical Algebraic Geometry. This talk will explore fundamental numerical algebraic geometric algorithms for solving systems of polynomial equations and the application of these algorithms to problems arising in engineering and mathematical biology.
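As a small illustration of the last abstract's theme (my own sketch, not from the talk): homotopy-continuation solvers in numerical algebraic geometry track roots using Newton's method as their local engine. The system below is an invented example.

```python
# Newton's method for a square polynomial system F(x, y) = 0.
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 5, x*y - 2])

def J(v):
    # Jacobian of F
    x, y = v
    return np.array([[2*x, 2*y], [y, x]])

v = np.array([2.5, 0.5])                  # rough starting point
for _ in range(10):
    v = v - np.linalg.solve(J(v), F(v))
print(v, F(v))                            # converges to (2, 1), one of the 4 roots
```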
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8082065582275391, "perplexity": 1463.5490749868018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00558.warc.gz"}
https://www.physicsforums.com/threads/forced-oscillation-q.216899/
# Forced Oscillation Q

1. Feb 21, 2008

### ~christina~

[SOLVED] Forced Oscillation Q

1. The problem statement, all variables and given/known data

A 2.00 kg object attached to a spring moves without friction and is driven by an external force given by $$F= (3.00\,\mathrm{N})\sin(2 \pi t)$$ The force constant of the spring is 20 N/m. Determine a) the period b) the amplitude of the motion

2. Relevant equations

$$T= 2 \pi / \omega$$

$$A= (F_o /m)/ \sqrt{ (\omega^2- \omega_o^2)^2 + (b \omega /m)^2}$$

3. The attempt at a solution

a) $$T= 2 \pi / \omega = 2 \pi/ 2 \pi = 1\,\mathrm{s}$$

b) Um... $$A= (F_o /m)/ \sqrt{ (\omega^2- \omega_o^2)^2 + (b \omega /m)^2}$$ From $$F= (3.00\,\mathrm{N})\sin(2 \pi t)$$ I read off $$F_o= 3.00\,\mathrm{N}$$ and $$\omega = 2 \pi$$, but I'm not sure which omega in the given equation this is.. is it $$\omega_o$$ or just $$\omega$$? b = 0, so that cancels out....

Thanks a lot

2. Feb 21, 2008

### rl.bhat

wo is the angular frequency of the driving force and w is the natural frequency of the oscillating spring. When w = wo the amplitude of the forced oscillation is maximum, and that is the condition for resonance.

3. Feb 21, 2008

### ~christina~

But I was curious to know which one was given in the equation, as I know it's written in my book that it is $$\omega$$, but I don't know which one. And after I know which one it is, how do I find the other one, since I want to find the amplitude?

4. Feb 21, 2008

### uart

Actually the normal convention is the opposite way around, $$\omega_0$$ being the natural frequency. The natural frequency $$\omega_0$$ is equal to $$\sqrt{k/m}$$, where "k" is the spring constant.

BTW. Obviously you're using a cookie-cutter approach of substituting into "relevant" equations, so I'm guessing that a first-principles approach of solving the system's differential equation is beyond the scope of your current course. I should however point out that the way you are solving this system is fundamentally flawed, in that with a truly frictionless system the natural response cannot be ignored for any value of time t. Your "relevant equations" are only finding the particular solution and ignoring the homogeneous solution, which depends upon the initial conditions. Actually, your approach would be valid as a good approximation if your system is presented as a near-frictionless system in steady state (note that a truly frictionless system never actually reaches steady state).

BTW. The truly relevant equation for this system is the DE: $$m \, d^2x/dt^2 + k x= A \sin(\omega t)$$

Last edited: Feb 21, 2008

5. Feb 22, 2008

### ~christina~

b) amplitude of motion $$A= (F_o /m)/ \sqrt{ (\omega^2- \omega_o^2)^2 + (b \omega /m)^2}$$

However, I'm not sure: b = 0, so that cancels out.... ====> this IS supposed to cancel out, right?? I can't figure the correct answer out.... $$\omega = \omega_o$$ thus they SHOULD cancel out, thus = 0. However, I said that since the damping was small (frictionless), then shouldn't b = 0 as well??? But if the above is true, then the bottom of the amplitude equation would cancel out, since it would = 0 + 0??? Then in the end wouldn't the equation be just

$$A= F_o/m$$ (after cancelling out everything?)

Then A = 3/2 = 1.5 => wrong....

Okay, so then I think that I actually plug in for the omegas, and b I assume is 0 since damping is nonexistent (frictionless), but after plugging in $$\omega = \sqrt{k/m} = \sqrt{20.0\,\mathrm{N/m} / 2.00\,\mathrm{kg}} = 3.1623$$ and $$\omega = 2 \pi$$ from the equation given, what do I get??
$$A= (F_o /m)/ \sqrt{ (\omega^2- \omega_o^2)^2 + (b \omega /m)^2}$$

$$A= (3.0\,\mathrm{N} /2.00\,\mathrm{kg})/ \sqrt{ ((2 \pi )^2- (3.1623))^2 + (0)^2}$$

$$A= 1.5/ 5.429 = 0.27629\,\mathrm{m} \Rightarrow 0.0027629\,\mathrm{m}$$

Which is ALSO NOT the answer. :uhh:

THANKS VERY MUCH

Last edited: Feb 22, 2008

6. Feb 22, 2008

### rl.bhat

The natural frequency of the spring is given by wo^2 = k/m = 20/2 = 10. The angular frequency of the driving force is w = 2*pi. Substitute these values in the expression for the amplitude (put b = 0), and you will get the answer.

7. Feb 22, 2008

### ~christina~

I got it. THANKS rl.bhat
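A numerical cross-check of part (b) (my own addition, not from the thread): integrating the undamped driven oscillator and starting on the particular solution reproduces the steady-state amplitude A = (F₀/m)/|ω² − ω₀²| ≈ 0.0509 m, which is what rl.bhat's substitution yields. SciPy is assumed available.

```python
# Integrate m x'' + k x = F0 sin(2 pi t) and compare the amplitude with the formula.
import numpy as np
from scipy.integrate import solve_ivp

m, k, F0 = 2.0, 20.0, 3.0
w = 2 * np.pi                      # driving angular frequency
w0 = np.sqrt(k / m)                # natural angular frequency, sqrt(10)

A_formula = (F0 / m) / abs(w**2 - w0**2)
print(A_formula)                   # ~0.0509 m

def rhs(t, y):
    x, v = y
    return [v, (F0 * np.sin(w * t) - k * x) / m]

# Start on the particular (steady-state) solution so the free oscillation is
# absent: here x_p = -A sin(w t), so x(0) = 0 and v(0) = -A*w.
sol = solve_ivp(rhs, (0, 10), [0.0, -A_formula * w],
                rtol=1e-9, atol=1e-12, dense_output=True)
t = np.linspace(5, 10, 5000)
print(np.max(np.abs(sol.sol(t)[0])))   # ~0.0509, matching the formula
```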
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9697301983833313, "perplexity": 1264.0989183442807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191405.12/warc/CC-MAIN-20170322212951-00587-ip-10-233-31-227.ec2.internal.warc.gz"}
http://specialfunctionswiki.org/index.php/Elliptic_function
# Elliptic function

A function $f$ is called elliptic if it is a doubly periodic function and it is meromorphic.

# Properties

Theorem: A nonconstant elliptic function has a fundamental pair of periods.

Proof:

Theorem: If an elliptic function $f$ has no poles in some period parallelogram, then $f$ is a constant function.

Proof:

Theorem: If an elliptic function $f$ has no zeros in some period parallelogram, then $f$ is a constant function.

Proof:

Theorem: The contour integral of an elliptic function taken along the boundary of any cell equals zero.

Proof:

Theorem: The sum of the residues of an elliptic function at its poles in any period parallelogram equals zero.

Proof:

Theorem: The number of zeros of an elliptic function in any period parallelogram equals the number of poles, counted with multiplicity.

Proof:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.888992428779602, "perplexity": 375.72121895913034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945648.77/warc/CC-MAIN-20180422193501-20180422213501-00108.warc.gz"}
http://mathhelpforum.com/advanced-algebra/182423-reasons-two-inclusions.html
# Thread: Reasons for two inclusions 1. ## Reasons for two inclusions I am confused as to why in this solution you have to take two inclusions; doesn't the first half of the solution prove everything (i.e. the bit where you take a linear combination of S)? http://www.mth.kcl.ac.uk/courses/cm222/sol1.pdf 2. which problem are you referring to? 3. sorry, q5 4. in general, to show two sets A, B are equal, showing A is a subset of B is not sufficient. B might be bigger. in your problem, showing that linear combinations of S (elements of span(S)) all have the property that the sum of the entries is 0 doesn't mean that you have accounted for every single matrix of that form. you also need to show that if the sum of the 2x2 matrix entries is 0, you can ALWAYS write the matrix as an element of span(S). so given the matrix A: [a b] [c d], with a + b + c + d = 0, you have to FIND real numbers α1, α2, α3, α4, α5, α6 with: α1A1 + α2A2 + α3A3 + α4A4 + α5A5 + α6A6 = A. if you can always do this, then the set of such matrices A is a subset of span(S). (in point of fact, you only need 3 elements of S. you might suspect this from seeing that if you pick b, c, d at will, a + b + c + d = 0 forces you to pick -b-c-d for a. that is, we can pick a subset of S with the same span, because S is linearly dependent; see the worked example below.)
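To make the second inclusion concrete, here is a small worked example with an assumed zero-sum basis (these three matrices are illustrative; they are not necessarily the S from the linked problem sheet). Take

E1 = [1 -1] [0 0], E2 = [1 0] [-1 0], E3 = [1 0] [0 -1].

Then for any A = [a b] [c d] with a + b + c + d = 0 we can write A = (-b)E1 + (-c)E2 + (-d)E3, because the top-left entry works out to -b - c - d = a while the other three entries give b, c, d directly. So every zero-sum 2x2 matrix lies in the span of just these three matrices, which is exactly the kind of explicit "FIND the coefficients" argument the inclusion requires.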
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8607088327407837, "perplexity": 536.7772954986856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320264.42/warc/CC-MAIN-20170624152159-20170624172159-00301.warc.gz"}
https://www.ic.sunysb.edu/Class/phy141md/doku.php?id=phy142kk:lectures:16
# Lecture 15 - Magnetic Force ## Magnitude of magnetic field For a wire of length $\vec{l}$ which lies within a magnetic field $\vec{B}$ and carries a current $I$ $\vec{F}=I\vec{l}\times\vec{B}$ or in the diagram below $F=IlB\sin\theta$ We can also chop the length up in to infinitesimal pieces which produce infinitesimal forces, to accommodate a wire that changes its direction with respect to a magnetic field, or a non-uniform magnetic field. $d\vec{F}=I\,d\vec{l}\times\vec{B}$ ## Measuring a field using the force In the situation above the net force on the wire is from the length $l$ at the bottom of the loop and is of magnitude $F=IlB$ There are forces on the wires on the sides of the loop, but for a rigid loop these cancel out. We can thus measure the magnetic field using $B=\frac{F}{Il}$ where the force can be measured from the change in the apparent weight of the object. This is a fairly standard laboratory experiment. ## Force on a curved wire Let us now consider a similar situation, but one in which the bottom of the loop is a semicircle. The force $d\vec{F}=I\,d\vec{l}\times\vec{B}$ on the semicircular part of the wire is directed radially outwards. The magnitude is $dF=IBR\,d\phi$ But the horizontal components of the force cancel each other out, so that the total force is $\int_{0}^{\pi}\,dF\sin\phi=\int_{0}^{\pi}IBR\sin\phi\,d\phi=IBR\int_{0}^{\pi}\sin\phi\,d\phi=IBR[-\cos\phi]_{0}^{\pi}=2IBR$ The length of wire in the semicircle is $l=\pi R$ so as compared to the straight case the force is reduced by a factor $\frac{2}{\pi}$. Note also that the answer is the same as for a “square” loop that is as wide as the semicircular loop. ## Force on a moving charge Electric charges in a wire feel a force but are not free to leave the wire, so the effect is a force on the wire. Free electrons in a magnetic field also feel a force and are free to respond to it. In the same way as a wire only feels a magnetic force when a current flows, charges only feel a magnetic force when they are moving. The force on the charge depends on the velocity. For a single charge $q$ $\vec{F}=q\vec{v}\times\vec{B}$ As before we can use a right hand rule to determine the direction of the force. We simply replace the current with the direction of the velocity of the charge. The direction of the force reverses sign if the polarity of the charge is reversed. ## Path of a charged particle in a uniform magnetic field We can show that the path of a charged particle moving in a uniform magnetic field is a circle. As we will recall from when we studied uniform circular motion, a force perpendicular to the velocity changes the direction of the velocity only, not its magnitude. The centripetal force is provided by the magnetic field $\frac{mv^{2}}{r}=qvB$ The radius of the circle is then $r=\frac{mv}{qB}$ We can see that we could use this to measure the ratio of the charge of an electron to its mass, a classic experimental measurement, see here. The time it takes for an electron to go round the circle is $T=\frac{2\pi r}{v}=\frac{2\pi m}{qB}$ and the frequency, which we call the cyclotron frequency, is $f=\frac{1}{T}=\frac{qB}{2\pi m}$ You can visualize the trajectories here If you would like to try to build a cyclotron, take a look here, here or here for a few successful efforts. Look here for a recent conference of small cyclotron builders. ## Helical Paths Velocity components not perpendicular to the field are not affected by it. 
Therefore a charged particle which is moving at some angle to the field will follow a helical path around the direction of the field. If the field strength increases, the radius of the helical path decreases. In the case of charged particles traveling along the Earth's magnetic field this leads to a concentration of charged particles near the poles, which we see as aurorae. You can visualize these trajectories here. ## Lorentz Equation The Lorentz equation combines the electric force and the magnetic force on a charged particle $\vec{F}=q(\vec{E}+\vec{v}\times\vec{B})$ You can visualize these trajectories here. ## Torque on a current loop A current loop which can rotate around its axis can experience a torque when placed in a uniform magnetic field. When the loop is aligned with the field the net torque experienced will be $\tau=IaB\frac{b}{2}+IaB\frac{b}{2}=IabB=IAB$ Where $A$ is the area of the loop. For $N$ loops the formula just becomes $\tau=NIAB$ However we can see that when the loop makes an angle $\theta$ with the field $\tau=NIAB\sin\theta$ ## Magnetic Dipole Moment A good way to represent the orientation dependence of the torque is to define a new vector quantity, the magnetic dipole moment $\vec{\mu}=NI\vec{A}$ The direction of the vector can be determined by the right hand rule and we may now write the torque as $\vec{\tau}=\vec{\mu}\times\vec{B}$
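As a numerical aside (the field and speed below are arbitrary example values, not from the lecture), the formulas $r = mv/qB$ and $f = qB/2\pi m$ evaluated for an electron:

```python
import math

# Radius and cyclotron frequency of an electron in a uniform magnetic field,
# r = m*v/(q*B) and f = q*B/(2*pi*m).
q = 1.602e-19    # electron charge magnitude (C)
m = 9.109e-31    # electron mass (kg)
B = 1.0e-3       # field strength (T), example value
v = 1.0e6        # speed perpendicular to the field (m/s), example value

r = m * v / (q * B)            # ~5.7e-3 m
f = q * B / (2 * math.pi * m)  # ~2.8e7 Hz; note f is independent of v
print(r, f)
```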
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.893179714679718, "perplexity": 256.2370130181635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400283990.75/warc/CC-MAIN-20200927152349-20200927182349-00534.warc.gz"}
http://ejde.math.txstate.edu/Volumes/2014/89/abstr.html
Electron. J. Diff. Equ., Vol. 2014 (2014), No. 89, pp. 1-10.

### Hamiltonians representing equations of motion with damping due to friction

Stephen Montgomery-Smith

Abstract: Suppose that $H(q,p)$ is a Hamiltonian on a manifold $M$, and $R(q,\dot q)$, the Rayleigh dissipation function, satisfies the same hypotheses as a Lagrangian on the manifold $M$. We provide a Hamiltonian framework that gives the equations $$\dot q = \frac{\partial H}{\partial p}, \qquad \dot p = -\frac{\partial H}{\partial q} - \frac{\partial R}{\partial \dot q}.$$ The method is to embed $M$ into a larger framework where the motion drives a wave equation on the negative half line, where the energy in the wave represents heat being carried away from the motion. We obtain a version of Noether's Theorem that is valid for dissipative systems. We also show that this framework fits the widely held view of how Hamiltonian dynamics can lead to the "arrow of time."

Submitted January 23, 2014. Published April 2, 2014. Math Subject Classifications: 70H25. Key Words: Hamiltonian; Lagrangian; Rayleigh dissipation function; friction; Noether's Theorem.

Show me the PDF file (194 KB), TEX file, and other files for this article.

Stephen Montgomery-Smith Department of Mathematics, University of Missouri Columbia, MO 65211, USA email: [email protected]
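The abstract's setting can be illustrated with the simplest possible case (this example is mine, not the paper's): $H = p^2/2m + kq^2/2$ and $R = b\dot q^2/2$ reduce the equations of motion above to the familiar damped oscillator, which a basic RK4 integrator confirms decays toward rest.

```python
# Damped oscillator from H = p^2/(2m) + k*q^2/2 with Rayleigh function
# R = b*qdot^2/2:  qdot = p/m,  pdot = -k*q - b*qdot.
m, k, b = 1.0, 4.0, 0.5  # example parameters

def deriv(state):
    q, p = state
    qdot = p / m
    pdot = -k * q - b * qdot   # -dH/dq - dR/dqdot
    return (qdot, pdot)

def rk4_step(state, dt):
    def add(s, d, f):
        return (s[0] + f * d[0], s[1] + f * d[1])
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

state = (1.0, 0.0)  # start displaced, at rest
for _ in range(2000):
    state = rk4_step(state, 0.01)
print(state)  # both q and p have decayed toward 0: energy carried away as heat
```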
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9531740546226501, "perplexity": 949.795842993793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660181.83/warc/CC-MAIN-20160924173740-00253-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.arxiv-vanity.com/papers/cond-mat/0501474/
# A Unified Theory of Consequences of Spontaneous Emission in a Λ System Sophia E. Economou    Ren-Bao Liu    L.J. Sham Department of Physics, University of California San Diego, La Jolla, California 92093-0319    D.G. Steel The H. M. Randall Laboratory of Physics, University of Michigan, Ann Arbor, MI 48109 ###### Abstract In a Λ system with two nearly degenerate ground states and one excited state in an atom or quantum dot, spontaneous radiative decay can lead to a range of phenomena, including electron-photon entanglement, spontaneously generated coherence, and two-pathway decay. We show that a treatment of the radiative decay as a quantum evolution of a single physical system composed of a three-level electron subsystem and photons leads to a range of consequences depending on the electron-photon interaction and the measurement. Different treatments of the emitted photon channel the electron-photon system into a variety of final states. The theory is not restricted to the three-level system. ###### pacs: 78.67.Hc, 42.50.Md, 78.67.Hc, 42.50.Ct ## I Introduction The electromagnetic vacuum is commonly considered as a reservoir which causes decoherence and decay of a quantum mechanical system coupled to it. An alternative view holds that the two subparts (‘quantum system’ and ‘bath’) are constituents of a single closed quantum mechanical whole, which is governed by unitary evolution until a projection (measurement) is performed. Different projections may give rise to a variety of phenomena which on the surface appear unrelated. Spontaneous emission is a quantum phenomenon which has been treated in both ways. Its effects are of interest from the views of both fundamental physics and applications. The radiative decay of a three-level system is attractive for its simplicity and yet richness in physical phenomena. A variety of effects follow from the spontaneous decay. Those which involve semiclassical light and ensembles of atoms include electromagnetically induced transparency Harris et al. (1990) and lasing without inversion Harris (1989). By definition, a Λ system has two nearly degenerate ground states which are dipole-coupled to one excited state for optical transitions. We shall, for conciseness, refer to the states as electronic states in an atom or quantum dot. The decoherence and decay effects for a single Λ system are relevant to quantum computing and information processing, for example in many implementation schemes Imamoğlu et al. (1999); Chen et al. (2004); Monroe et al. (1995); Brennen et al. (1999); Liu et al. (2004), which can be more practical than the direct excitation of the two-level system. A Λ system initially in the excited state will eventually decay by the emission of a photon. This process may result in the entanglement of the system with the emitted photon. Recently, entanglement between the hyperfine levels of a trapped ion and the polarization of a photon spontaneously emitted from the ion was demonstrated experimentally Blinov et al. (2004). In quantum optics of the atom, coupling to the modes of the electromagnetic vacuum can contribute to coherence between atomic states, and such terms have been implicit in the textbook treatment of spontaneous radiative decay Cohen-Tannoudji et al. (1992) or indeed explicit in research papers Cardimona and Stroud (1983). 
In the early 90’s, it was pointed out that in a Λ system the spontaneous decay of the highest state to the two lower ones may result in a coherent superposition of the two lower states Javanainen (1992). The conditions for this Spontaneously Generated Coherence (SGC) as presented in Ref. Javanainen, 1992 are that the dipole matrix elements of the two transitions are non-orthogonal and that the difference between the two frequencies is small compared to the radiative line-width of the excited state. The final example is the so-called two-pathway decay, in which a Λ-system (as opposed to a V system) cannot exhibit quantum beats because the information on which decay path the system took is in principle available by detection of the atom, and therefore no beats are expected (p. 19 of Ref. Scully and Zubairy, 1997).

All the phenomena listed above, when viewed separately, appear unrelated, if not downright contradictory. In fact, they stem from the same process, namely the radiative spontaneous decay of a Λ-system. The primary purpose of this paper is to show how they naturally emerge from the same time evolved composite state of the whole system (Λ subsystem and the electromagnetic modes). From this treatment follow the conditions for each effect in terms of the electron-photon coupling and in terms of different ways of projecting the photon state by measurement. We also show how a change of symmetry of the system by the introduction of a perturbation may determine whether SGC will occur or not. The second goal of this work is to analyze these effects in the solid state, where the two lower levels of the Λ system are the spin states of an electron confined in a semiconductor quantum dot. For this system, SGC has been given a theoretical analysis and experimental demonstration Dutt et al. (2004), and we further propose here an experiment for the demonstration of spin-photon polarization entanglement. In our treatment, we distinguish between a single system and an ensemble for the various phenomena; in this context, we make a comparative study of the solid state and the atomic system.

This paper is organized as follows: In section II we present the time evolution of the decay process which leads to the conditions for the occurrence of each of the listed phenomena. In section III we deduce a set of conditions on the symmetry of the system for SGC. Sections IV and V illustrate these conditions by specific examples from atomic and solid state systems, respectively. We also present the theory of the pump-probe experiment and derive the probe signal, which is altered by the SGC term (section VI).

## II Spontaneous Emission as Quantum Evolution

Consider a single Λ system in a photon bath with modes $k \equiv (\mathbf{k},\sigma)$, $\mathbf{k}$ being the wave vector and $\sigma$ the state with the polarization vector $\hat{\boldsymbol{\epsilon}}_\sigma$. In the dipole and rotating-wave approximation, the Hamiltonian for the whole system is given by

$$H = \sum_k \omega_k b_k^\dagger b_k + \sum_{i=1}^{3}\epsilon_i |i\rangle\langle i| + \sum_{k;\,i=1,2} g_{ik}\, b_k^\dagger |i\rangle\langle 3| + \sum_{k;\,i=1,2} g_{ik}^{*}\, b_k |3\rangle\langle i|, \tag{1}$$

where $b_k$ destroys a photon of energy or frequency $\omega_k$ ($\hbar = 1$) and $|i\rangle$ is the electronic state with energy or frequency $\epsilon_i$. The coupling between the photon and the electron is $g_{ik}$, proportional to the dipole matrix element for the transition $|3\rangle \rightarrow |i\rangle$. The system is taken at $t = 0$ to be in the excited level $|3\rangle$ (which can be prepared by a short pulse), and the photon bath is in the vacuum state, i.e., the whole system is in a product state. For $t > 0$, the composite wavepacket can be written as

$$|\psi(t)\rangle \equiv c_3(t)|3\rangle|\mathrm{vac}\rangle + \sum_k c_{1k}(t)|1\rangle|k\rangle + \sum_k c_{2k}(t)|2\rangle|k\rangle, \tag{2}$$

where $|\mathrm{vac}\rangle$ is the photon vacuum state. 
Evolution of this state is governed by the Schrödinger equation. By the Weisskopf-Wigner theory Weisskopf and Wigner (1930) of spontaneous emission Scully and Zubairy (1997), the coefficient $c_3$ is obtained by one iteration of the other coefficients:

$$\partial_t c_3 = -i\epsilon_3 c_3 - \sum_k |g_{1k}|^2\int_0^t e^{-i(\epsilon_1+\omega_k)(t-t')}c_3(t')\,dt' - \sum_k |g_{2k}|^2\int_0^t e^{-i(\epsilon_2+\omega_k)(t-t')}c_3(t')\,dt'. \tag{3}$$

Since the electron-photon coupling is much weaker than the transition energy in the system, the integrals in the equation above can be evaluated in the Markovian approximation, resulting in:

$$\partial_t c_3 \approx -i\epsilon_3 c_3 - \frac{\Gamma_{31}}{2}c_3 - \frac{\Gamma_{32}}{2}c_3, \tag{4}$$

where

$$\Gamma_{3i} = 2\sum_k |g_{ik}|^2\int_0^t e^{-i(\epsilon_i+\omega_k)(t-t')}\,dt'. \tag{5}$$

Thus, the solution is

$$c_3 \approx e^{-(i\epsilon_3+\Gamma/2)t}, \tag{6}$$

where $\Gamma = \Gamma_{31} + \Gamma_{32}$ is the radiative linewidth of the excited state. Furthermore, $c_{1k}$ and $c_{2k}$ are given by

$$c_{1k} \approx \frac{-g_{1k}}{\epsilon_3-\epsilon_1-\omega_k-i\Gamma/2}\left[e^{-i(\epsilon_1+\omega_k)t}-e^{-i\epsilon_3 t-\frac{\Gamma}{2}t}\right],$$
$$c_{2k} \approx \frac{-g_{2k}}{\epsilon_3-\epsilon_2-\omega_k-i\Gamma/2}\left[e^{-i(\epsilon_2+\omega_k)t}-e^{-i\epsilon_3 t-\frac{\Gamma}{2}t}\right].$$

In order to study the system in the subspace of the lower states, we take the limit $t \gg 1/\Gamma$. After the spontaneous emission process, the final state is an electron-photon wavepacket with the coefficients

$$c_{1k} \approx \frac{-g_{1k}}{\epsilon_3-\epsilon_1-\omega_k-i\Gamma/2}\,e^{-i(\epsilon_1+\omega_k)t}, \tag{7}$$
$$c_{2k} \approx \frac{-g_{2k}}{\epsilon_3-\epsilon_2-\omega_k-i\Gamma/2}\,e^{-i(\epsilon_2+\omega_k)t}. \tag{8}$$

The state of a photon is specified by its propagation direction $\mathbf{n}$, polarization $\sigma$ ($\sigma = \alpha, \beta$), and frequency $\omega$. So we can formulate the total wavepacket as

$$\sum_{\mathbf{n},\sigma}\left[g_{1\sigma}e^{-i\epsilon_1 t}|1\rangle|\mathbf{n},\sigma,f_1(t)\rangle + g_{2\sigma}e^{-i\epsilon_2 t}|2\rangle|\mathbf{n},\sigma,f_2(t)\rangle\right],$$

where we have taken the coupling constants to be frequency-independent. In Eq. (II), $f_i(t)$ is the pulse shape of the photon. From Eq. (7) and (8), we see that the photon wavepacket has a finite bandwidth; this point, which was first studied by Weisskopf and Wigner in their classic treatment of spontaneous emission Weisskopf and Wigner (1930), is reflected in the structure of $f_i(t)$. These functions have a central frequency equal to $\epsilon_3 - \epsilon_i$ and a bandwidth equal to $\Gamma$. As a consequence of the finite bandwidth, for a given propagation direction and polarization, the basis states are not orthogonal, the overlap between them being

$$\langle\mathbf{n},\sigma,f_l|\mathbf{n},\sigma,f_j\rangle = \frac{i\Gamma}{i\Gamma+\epsilon_{lj}}, \tag{9}$$

where $\epsilon_{lj} = \epsilon_l - \epsilon_j$. We should emphasize that the wavepacket formed in Eq. (II) does not rely on the Markovian approximation. In a full quantum kinetic description of the photon emission process, the wavepacket of the whole system would still have the same form, the central frequency and bandwidth of the pulses would be close to those found using the Markovian approximation, but the specific profiles of $f_i(t)$ would be different from those given by Eq. (7) and (8).

The various phenomena (electron and photon polarization entanglement, SGC, and two-pathway decay) can all be derived from the wavepacket of Eq. (II). If the spontaneously emitted photon is not detected at all, we have to average over the ensemble of photons of all possible propagation directions to obtain the electronic state. This is the usual textbook treatment of spontaneous emission. However, if detection of an emitted photon leads to a knowledge that its direction of propagation is $\mathbf{n}_0$, then the (unnormalized) electron-photon wavepacket should be projected along that direction:

$$\sum_{\sigma}\left[g_{1\sigma}e^{-i\epsilon_1 t}|1\rangle|\mathbf{n}_0,\sigma,f_1(t)\rangle + g_{2\sigma}e^{-i\epsilon_2 t}|2\rangle|\mathbf{n}_0,\sigma,f_2(t)\rangle\right].$$

When the two transitions are very close in frequency, i.e., $\epsilon_{21} \ll \Gamma$, the overlap of the two photon wavepackets deviates from unity by $\eta = O(\epsilon_{21}/\Gamma)$. After tracing out the envelopes of the photon by use of any complete basis (e.g. monochromatic states), the state of the electron and photon polarization is, with the propagation direction understood,

$$|\Upsilon\rangle = \sqrt{N}\sum_\sigma\left[g_{1\sigma}|1\rangle|\sigma\rangle + g_{2\sigma}|2\rangle|\sigma\rangle\right] + O(\eta), \tag{10}$$

where $N$ is a normalization constant, given by

$$N^{-1} = \sum_{j=1,2}\,\sum_{\sigma=\alpha,\beta}|g_{j\sigma}|^2. \tag{11}$$ 
The order-$\eta$ error recorded here is meant to indicate the magnitude of the mixed-state error which, if neglected, results in a pure state. From this pure state, we can find explicitly the necessary conditions for entanglement or SGC. However, the approximation of neglecting $\eta$ is unnecessary for computing a measure of entanglement of the resultant mixed state Bennett et al. (1996).

### II.1 Entanglement

A measure of entanglement of the bipartite state in Eq. (10) is given by the von Neumann entropy of the reduced density matrix of the state Wootters (1998) for either the subsystem of the two low-lying electronic states or the subsystem of the photon polarization states. Taking the partial trace of the polarization states of the density matrix of the pure state leads to the reduced density matrix for the electronic states,

$$\rho_E = N\sum_{ij}|i\rangle\Big[\sum_\sigma g_{i\sigma}g^*_{j\sigma}\Big]\langle j|. \tag{12}$$

Diagonalization of this partial density matrix leads to two eigenvalues,

$$p_\pm = \frac{1}{2} \pm \sqrt{\frac{1}{4}-D^2}, \tag{13}$$

where $D^2$ is the determinant of the reduced density matrix $\rho_E$, or

$$D = N\,|g_{1\alpha}g_{2\beta}-g_{1\beta}g_{2\alpha}|, \tag{14}$$

for the two electronic states and two polarizations, $\alpha, \beta$, normal to the propagation direction $\mathbf{n}$. The entropy of entanglement is given by the entropy,

$$S = -p_+\log_2 p_+ - p_-\log_2 p_-. \tag{15}$$

As $D$ ranges from 0 to $1/2$, the entropy ranges from 0 to 1, giving a continuous measure of entanglement as the state goes from no entanglement to maximum entanglement. To find the axis along which the entanglement is maximum, we have to maximize $D$ as a function of the orientation. For a particular system, this axis can be found in terms of the dipole matrix elements of the two transitions. However, not all systems can have maximally entangled states. We will apply this to specific examples in the following section.

### II.2 SGC

From the reduced density matrix, we can also find the conditions for SGC. Maximum SGC occurs when the reduced density matrix is a pure state. In terms of the electron-photon coupling constants the condition is the vanishing of the discriminant in Eq. (14). This means that when the SGC effect is maximized, there exists a particular transformation which takes the basis of the electronic states to a basis which has the property that one basis state is always the final state of the Λ-system immediately after the spontaneous emission process, and the other is a state disconnected from the excited state by dipolar coupling, i.e. a dark state. This point will be further explored in section III. The extreme values of $D$ and $S$ make it clear that maximum SGC means no entanglement and conversely that maximal entanglement leads to no SGC. However, partial entanglement can coexist with the potentiality of some SGC for values of $D$ between the two extremes. Our theory can be easily extended to systems with more than two ground states. For example, in a system whose ground states are the four states from two electron spins, the SGC may lead to the coherence and entanglement between the two spins, which is the mechanism of a series of proposals of using vacuum fluctuation to establish entanglement between qubits Plenio et al. (1999); Feng et al. (2003).

### II.3 Two-pathway decay

So far we have investigated the consequences when the two transitions are close in frequency ($\epsilon_{21} \ll \Gamma$). When this is not the case, the tracing-out of the wavepacket will generally produce a mixed state in electron spins and photon polarizations. In the limit of large splitting, i.e., $\epsilon_{21} \gg \Gamma$, the overlap between the two photon wave functions vanishes, and the reduced density matrix for the spin and photon polarization would be mixed. 
In this case there is neither spin-polarization entanglement nor SGC, but instead the time development can be described as a two-pathway decay process: the excited state can relax to two different states by the emission of photons with distinct frequencies. For $\epsilon_{21}$ between these two limits, the state in Eq. (II) may lead to an entanglement between the pulse shapes of the photon and the two lower electronic levels on measuring the photon polarization. Furthermore, from the entangled state in Eq. (II), SGC or polarization entanglement may still be recovered (provided of course that the necessary conditions on the $g$'s are satisfied) if the quantum information carried by the frequency of the photon is erased Kim et al. (1999). This can be done by chopping part of the photon pulse, and thus subjecting its frequency to (more) uncertainty. In a time-selective measurement, only photons emitted at a specific time period, say from $t_o$ to $t_o + \Delta t$, are selected. So the projection operator associated with this measurement is $\tilde P_o$, which represents a photon pulse passing the detector at $t_o$. The projected state after this measurement

$$\sum_\sigma\left[g_{1\sigma}f_1(t_o)|1\rangle + g_{2\sigma}f_2(t_o)|2\rangle\right]|\mathbf{n}_0\sigma\rangle \tag{16}$$

is a pure state of the electron and photon-polarization, so that entanglement or SGC is restored. By writing the projector in the frequency domain

$$\tilde P_o = \int d\omega\int d\omega'\, e^{i(\omega-\omega')t_o}\,|\omega'\rangle\langle\omega|, \tag{17}$$

we see that it can be understood as a broadband detector with definite phase for each frequency channel; thus it can erase the frequency (which-path) information while retaining the phase correlation. We note that a usual broadband detector without phase correlation is not sufficient to restore the pureness of the state. It is also interesting that SGC and entanglement can be controlled by choosing a different detection time $t_o$, as seen from Eq. (16).

## III Symmetry Considerations for SGC

In this section we investigate the symmetry relations between the different parts of the Hamiltonian necessary for SGC terms to appear. Our treatment is not restricted to Λ systems, but can be extended to a system with more than two lower levels. Consider a quantum mechanical system with one higher energy level $|e\rangle$ and a set of lower-lying states, described by a Hamiltonian $H_o$. Taking into account only dipole-type interactions, denote by $\hat d$ the polarization operator used in the selection rules. The $z$ axis is defined by the excited state via

$$J_z|e\rangle = M_e|e\rangle.$$

Note that $J_z$ can be either $L_z + S_z$, where $L$ is the orbital angular momentum operator and $S$ is the spin, or $L_z$ alone, as determined by the condition $[J_z, H_o] = 0$. That is to say there is an axial symmetry in the system associated with $J_z$. Among the lower lying states, the ones of interest are the ones appearing in the final entangled state of the whole system. We will refer to these states as ‘bright’, because they are orthogonal to the familiar dark states from quantum optics. There are at most three such states, $|B_m\rangle$, within a given degenerate manifold, corresponding to the three different possible projections of the dipole matrix elements along the $z$ axis, so $m = 1, 0, \bar 1$. In general, not all systems will have all three bright states. This concept that the final state involves only a small number of states (three in our case) gives a physical understanding of the electron-photon entangled state Chan et al. (2002). In order to have SGC, i.e., one or more off-diagonal terms of the type $\rho_{ij}$, with $i \neq j$ and $|i\rangle, |j\rangle$ among the lower states, there has to be a perturbation $H_1$ that breaks the symmetry associated with $J_z$; in particular, the following conditions have to be satisfied:

1. $[H_1, J_z] \neq 0$;
2. $[H_1, |e\rangle\langle e|] = 0$;
3. the splitting of the lower states involved is small compared to the radiative line-width $\Gamma$. 
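Condition 3 is the same crossover that Eq. (9) encodes: the pulse-shape overlap $|i\Gamma/(i\Gamma + \epsilon_{21})| = \Gamma/\sqrt{\Gamma^2 + \epsilon_{21}^2}$ is near unity in the SGC/entanglement regime and vanishes in the incoherent two-pathway regime. A short numerical sketch (mine, not the paper's):

```python
# Overlap of the two photon pulse shapes, Eq. (9): i*Gamma / (i*Gamma + e21).
# |overlap| -> 1 for e21 << Gamma (SGC / entanglement possible) and
# |overlap| -> 0 for e21 >> Gamma (incoherent two-pathway decay).
gamma = 1.0  # radiative linewidth, sets the scale
for e21 in [0.01, 0.1, 1.0, 10.0, 100.0]:
    overlap = 1j * gamma / (1j * gamma + e21)
    print(e21, abs(overlap))
```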
In general, we expect SGC between two eigenstates of the Hamiltonian which have nonzero overlap with the same bright state. The role of the first condition is to make SGC non-trivial; without this condition, it would always be possible to rotate to a different basis and formally acquire an SGC-like term in the equations (e.g. by rotating to a different spin basis in the zero magnetic field case in the heavy-hole trion system discussed below). The second condition ensures that the excited state will not mix under the action of $H_1$; relaxing this condition gives rise to the Hanle effect Cohen-Tannoudji et al. (1992); Scully and Zubairy (1997), in which an ensemble of atoms in a magnetic field is illuminated with a linearly polarized pulse and the reradiated light may be polarized along a rotated direction. This effect is another example where coherence plays an important role; it has recently been observed in doped GaAs quantum wells, in the heavy-hole trion system with confinement in one dimension Dzhioev et al. (2002). We shall discuss the quantum dot case below. As shown in Sec. II, when the radiative line-width of the excited state is smaller than the energy differences of the lower states the SGC effect will be averaged out. The third condition provides the valid regime for the occurrence of this phenomenon. The perturbation $H_1$ can be realized by a static electric or magnetic field, by the spin-orbit coupling, by hyperfine coupling, etc. Note the different origins of $H_1$ in different systems and that it may or may not be possible to control $H_1$. Examples of various systems follow, exhibiting the above conditions and demonstrating the different origins of $H_1$.

## IV Examples from Atomic Physics

### IV.1 SGC in atoms

Consider an atom with Hamiltonian $H_o$; excluding relativistic corrections, it can be diagonalized in the $|N,L,S,M_L,M_S\rangle$ basis. Consider as the system of interest the subspace formed by the excited state $|e\rangle$ and the lower-energy states. The various quantum numbers are of course restricted by the dipole selection rules. Here we will list only the three bright states:

$$|B_1\rangle = |N-1,2,1,2,1\rangle$$
$$|B_0\rangle = |N-1,2,1,1,1\rangle$$
$$|B_{\bar 1}\rangle = a|N-1,2,1,0,1\rangle + b|N-1,0,1,0,1\rangle$$

where the coefficients $a$ and $b$ can be determined in the following way: in the original basis, the matrix elements for the dipole transitions are given by the Wigner-Eckart theorem. By rotating to the new basis, and requiring the transition to the orthogonal (dark) combination to be forbidden, we find $a$ and $b$. Inclusion of the spin-orbit interaction, which plays the role of $H_1$, satisfies condition (i), the eigenstates then being those of the total angular momentum. Condition (ii) is also satisfied, because $|e\rangle$, as the state of maximum $M_L$ and $M_S$, does not mix under the spin-orbit coupling. In the new basis, SGC is expected to occur between states with the same value of $M_J$, which can also be verified by direct calculation. In this example the line-width of $|e\rangle$ is much smaller than the spin-orbit coupling strength; radiative line-widths in atoms are many orders of magnitude below typical spin-orbit splittings, which means that SGC will not be observed in such a system.

### IV.2 Entanglement and SGC of atomic hyperfine states

In this example, the Λ system is formed by the hyperfine states of a single trapped Cd ion in the presence of a magnetic field along the $z$ axis. In the $|F,m_F\rangle$ basis, the two lower levels are $|1,1\rangle$ and $|1,0\rangle$. The two lower levels have the same principal quantum number. The entanglement between the polarization of the photon and the atom has been demonstrated experimentally Blinov et al. (2004). 
To illustrate the methods developed in Section II, we will make use of the fact that the two lower levels are states of definite angular momentum and of its projection on the $z$ axis. Then, by the Wigner-Eckart theorem we know that the dipole moment of the transition to $|1,1\rangle$ has a nonzero component only along $\hat z$, whereas that of the transition to $|1,0\rangle$ has components only in the plane normal to $\hat z$. The wavepacket of the system is then given by

$$|\Upsilon\rangle = \frac{-\sqrt{2}\sin\vartheta\,|\vartheta\rangle|11\rangle + e^{-i\varphi}\cos\vartheta\,|\vartheta\rangle|10\rangle - ie^{-i\varphi}\,|\varphi\rangle|10\rangle}{\sqrt{2+\sin^2\vartheta}}, \tag{18}$$

where $\vartheta$ and $\varphi$ are the spherical coordinates of the propagation direction, measured from the $z$ and $x$ axes, respectively, and $|\vartheta\rangle$ and $|\varphi\rangle$ are the polarization basis states, which are linearly polarized parallel and normal to the plane formed by the $z$ axis and the propagation direction, respectively. Then from Eq. (18), we read off the $g$'s:

$$g_{1\vartheta} \propto -\sqrt{2}\sin\vartheta \tag{19}$$
$$g_{1\varphi} = 0 \tag{20}$$
$$g_{2\vartheta} \propto e^{-i\varphi}\cos\vartheta \tag{21}$$
$$g_{2\varphi} \propto ie^{-i\varphi}, \tag{22}$$

where $|1\rangle \equiv |1,1\rangle$ and $|2\rangle \equiv |1,0\rangle$. The measure of entanglement is then

$$D = \frac{\sqrt{2}\sin\vartheta}{2+\sin^2\vartheta}. \tag{23}$$

The maximum possible entanglement occurs at $\vartheta = \pi/2$, i.e., whenever the photon propagates perpendicularly to $\hat z$. The maximum value of $D \approx 0.47$ is close to the value $1/2$ of a maximally entangled state. $D$ does not depend on $\varphi$, as expected since there is azimuthal symmetry about $\hat z$.

In terms of SGC and symmetry, it is interesting to notice that the role of the (external or internal) field, $H_1$, introduced in section III can be played by the different projections (measurements), because the state before the measurement is an eigenstate of the operator $J_z$ (total angular momentum along $\hat z$) but not, in general, after the measurement. The magnetic field along the $z$-axis is included in the Hamiltonian $H_o$. If the spontaneously emitted photons are measured along the quantization axis, only the ones emitted from the transition to $|1,0\rangle$ will be detected, since only their polarization allows propagation along $\hat z$. On the other hand, a photon detector placed at a finite angle from $\hat z$ can play the role of $H_1$. Suppose a photon is spontaneously emitted along an axis $\hat n$. The density matrix of the state given by Eq. (18) is $\rho = |\Upsilon\rangle\langle\Upsilon|$. If we are only interested in the dynamics of the ion, and the polarization of the photon is not measured, then the photon polarization has to be traced out. Then the reduced density matrix of the system, in the atomic states $\{|1,1\rangle, |1,0\rangle\}$, is

$$\rho_{\mathrm{ion}} = \frac{1}{2+\sin^2\vartheta}\begin{pmatrix} 2\sin^2\vartheta & -\sqrt{2}\,e^{i\varphi}\sin\vartheta\cos\vartheta \\ -\sqrt{2}\,e^{-i\varphi}\sin\vartheta\cos\vartheta & 1+\cos^2\vartheta \end{pmatrix}. \tag{24}$$

The off-diagonal elements express coherence between the hyperfine states with dependence on the photon propagation direction. We can check that for $\vartheta = 0$ the probability of the atom being in the state $|1,1\rangle$ is zero and there are no off-diagonal elements, and for $\vartheta = \pi/2$ the off-diagonal elements are also zero, which means there is no SGC, but the state has the maximum possible entanglement. For all intermediate values of $\vartheta$ the hyperfine states and the photon polarization are entangled, and there is also some SGC when the photon is traced out. Maximum SGC occurs when $D$ is minimized; from Eq. (23) we see that it is zero for $\vartheta = 0$. This is expected anytime one of the two transitions involves a linearly polarized photon, since the latter cannot propagate along the quantization axis. So, for this orientation the final state can only be $|1,0\rangle$. For intermediate angles there is both entanglement and SGC involving both lower states, when the photon is traced out. Since SGC only occurs for particular photon propagation directions we could view it as ‘probabilistic’ SGC.

## V Examples from Solid State Physics

### V.1 Heavy-hole trion system in a magnetic field

In the optical control of the electron spin in a doped quantum dot Chen et al. 
(2004), a static magnetic field is imposed in a fixed direction at an angle $\psi$ with respect to the propagation of the circularly polarized pulse along the growth direction of the dot, defined as the $z$ axis. The two eigenstates of the electron spin along the field direction and the intermediate trion (bound state of an exciton with the excess electron) state in the Raman process form a three-level system. The trion state of interest consists of a p-like heavy hole and a pair of electrons in the singlet state. The $g$-factor in the plane of the heavy hole is approximately zero in magnetic fields up to 5 T Tischler et al. (2002) and the two electrons are in a rotationally invariant state. This means that the trion state, although it is spin polarized along $\hat z$, will not precess about a perpendicular $B$-field. Therefore it can be described by the ‘good’ quantum numbers of the angular momentum and its projection along $\hat z$. The lower levels $|1\rangle$, $|2\rangle$ are the eigenstates of the spin along the direction of the $B$-field and have spin projections $+1/2$ and $-1/2$, respectively.

To check if this system will have SGC, we will examine whether the conditions of Section III are satisfied. We take $H_o$ to be the Hamiltonian of the Q.D., with the trion state described above excited by light; the spin-orbit interaction is included in $H_o$, and any component of the field along $\hat z$ can also be included. $H_1$ is the contribution to the Hamiltonian due to the component of the magnetic field perpendicular to $\hat z$. Condition (i) is fulfilled since $[H_1, J_z] \neq 0$, and condition (ii) is obviously satisfied. The only bright state is the electron spin eigenstate $|\uparrow\rangle$ along $\hat z$. For later use, we also define the orthogonal state $|\downarrow\rangle$. Therefore we expect SGC between states $|1\rangle$ and $|2\rangle$ for any angle $\psi$, and since the linewidth of the trion is large enough compared to the Zeeman splitting, SGC should moreover have a detectable effect. As a matter of fact, it has already been demonstrated experimentally for this system, and, to the best of our knowledge, it is the only direct observation of SGC Dutt et al. (2004). For this nonlinear pump-probe experiment, the inclusion of SGC into the equations causes the amplitude and the phase of the probe signal to depend on the Zeeman splitting. More details on how this dependence occurs will be presented in the following section. Although our discussion has focused on single systems, the experiment was carried out for an ensemble. In general, for an ensemble of equivalent non-interacting atoms, an average over the different axes would have to be performed. However, in this quantum-dot solid state system, there is a common axis for all the dots, since they are grown on the same plane, and they have a relatively large in-plane cross-section as compared to their height. This is a clear advantage of the quantum dot ensemble over an ensemble of atoms.

We can also analyze this system using the methods in Section II. To find the $g$'s, we need the dipole matrix elements. These can be found by writing

$$|1\rangle = \cos\frac{\psi}{2}|\uparrow\rangle + \sin\frac{\psi}{2}|\downarrow\rangle \tag{25}$$
$$|2\rangle = \sin\frac{\psi}{2}|\uparrow\rangle - \cos\frac{\psi}{2}|\downarrow\rangle \tag{26}$$

Again, we will make use of the fact that $|\uparrow\rangle$ and $|\downarrow\rangle$ are angular momentum eigenstates along the $z$ axis, with the familiar selection rules. Only state $|\uparrow\rangle$ has a nonzero dipole matrix element with the trion, $d_+$, so that the transitions from the trion to $|1\rangle$ and $|2\rangle$ have dipole matrix elements equal to $d_+\cos(\psi/2)$ and $d_+\sin(\psi/2)$ respectively. Then, for a photon emitted along $(\vartheta,\varphi)$, we find the couplings:

$$g_{1\vartheta} = d_+ e^{i\varphi}\cos\vartheta\cos\frac{\psi}{2} \tag{27}$$
$$g_{1\varphi} = d_+ i e^{i\varphi}\cos\frac{\psi}{2} \tag{28}$$
$$g_{2\vartheta} = d_+ e^{i\varphi}\cos\vartheta\sin\frac{\psi}{2} \tag{29}$$
$$g_{2\varphi} = d_+ i e^{i\varphi}\sin\frac{\psi}{2}, \tag{30}$$

so that the determinant is always zero, independently of the angle $\psi$ and the emission direction. 
This means that the system in this configuration will never be entangled with the polarization of the photon, which, as we have seen, implies maximum SGC. The final state of the system is always $|\uparrow\rangle$, unentangled. Section VI gives an intuitive picture of this concept by the vector representation of (the mean value of) the spin.

### V.2 Light-hole trion in Voigt configuration

The spin-photon entanglement can be also realized in a quantum dot system by employing the light-hole trion state. The heavy and light hole excitons are split by the breaking of the tetrahedral symmetry of the bulk III-V compound. It might also be possible to make the light hole states lower in energy than the heavy holes. The magnetic field is pointing along the $\hat x$ direction, so that the lower levels are the two spin eigenstates along $\hat x$, $|+\rangle$ and $|-\rangle$. The optical pulses used are such that the light hole trion polarized along the $\hat z$ direction is excited. The excited state is a trion of a singlet pair of electrons and a light hole, and can thus be characterized by the angular momentum state of the light hole. We choose this state as the excited state of the Λ system and denote it by $|\tau\rangle$. The transitions to $|-\rangle$ and $|+\rangle$ involve a photon linearly polarized along $\hat x$ ($|X\rangle$) and one with elliptical polarization ($2|Z\rangle - i|Y\rangle$), respectively Yablonovitch (2001). In particular, after $|\tau\rangle$ has decayed, the state of the system is, from Eq. (10),

$$|\Upsilon\rangle = -\frac{1}{\sqrt{6}}\left[|X\rangle|-\rangle + (2|Z\rangle - i|Y\rangle)|+\rangle\right]. \tag{31}$$

We assume a measurement which determines the propagation direction of the photon. Then the state becomes:

$$|\Upsilon\rangle = -\frac{1}{\sqrt{2+3\sin^2\vartheta}}\big[\cos\vartheta\cos\varphi\,|\vartheta\rangle|-\rangle - (2\sin\vartheta + i\sin\varphi\cos\vartheta)\,|\vartheta\rangle|+\rangle - \sin\varphi\,|\varphi\rangle|-\rangle - i\cos\varphi\,|\varphi\rangle|+\rangle\big]. \tag{32}$$

Following the same procedure as in the trapped ion example, we find that the condition for maximum entanglement is $\vartheta = 0$; the value of $D$ is then $1/2$, maximal entanglement. SGC will only occur when $D$ in Eq. (14) is less than 0.5, and it will be maximum for propagation along the field direction, which means that the electron will be left in the state $|+\rangle$. For all other propagation directions there will be both entanglement and SGC between the two energy eigenstates when the photon is traced out. The phenomena following the spontaneous radiative decay of this system are indeed very similar to the trapped ion case. In the solid state system there is no need to isolate a single dot in order to observe SGC since all dots are oriented in the same direction. For quantum information processing, entanglement between photon-polarization and spin has to be established in a quantum dot. So isolating and addressing a single dot is required. Experimentally, this requirement is arguably feasible Bonadeo et al. (1998). The system should be initialized at state $|+\rangle$ (or $|-\rangle$) and subsequently excited by elliptically (or linearly) polarized light, so that only state $|\tau\rangle$ gets excited. Other trion states, involving electrons in the triplet state and/or heavy holes, have an energy separation from $|\tau\rangle$ large enough compared to the pulse bandwidth and so they can be safely ignored. Above we found that the state will be maximally entangled when the spontaneously emitted photon propagates along $\hat z$. When the optical axis is along $\hat z$, the spontaneously emitted photon may be distinguished from the laser photons by optical gating. As an alternative to the optical gating, to minimize scattered light the detector may be placed normal to the optical axis, i.e., at $\vartheta = \pi/2$. The value of $D$ is then 0.2, so that the entanglement will be significantly less than that along the optical axis, but should be measurable. The observation of the emitted photon and the measurement of its polarization can be made as in Ref. Blinov et al. (2004). 
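Two quick checks of numbers quoted above (these sketches are mine, not the paper's): the trapped-ion $D(\vartheta)$ of Eq. (23) peaks at $\sqrt{2}/3 \approx 0.47$ with entanglement entropy close to, but below, 1, and the heavy-hole couplings of Eqs. (27)-(30) give $D = 0$ for every emission direction.

```python
import math
import sympy as sp

# (1) Trapped ion, Eqs. (13), (15), (23): entropy of entanglement at theta = pi/2.
def entropy(D):
    p = 0.5 + math.sqrt(0.25 - D**2)
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

theta = math.pi / 2
D = math.sqrt(2) * math.sin(theta) / (2 + math.sin(theta)**2)
print(D, entropy(D))  # D ~ 0.471, S ~ 0.92: close to, but short of, maximal

# (2) Heavy-hole trion, Eqs. (27)-(30): the determinant vanishes identically,
# so the photon polarization is unentangled with the spin and SGC is maximal.
th, ph, psi, d = sp.symbols('theta phi psi d')
g1t = d * sp.exp(sp.I * ph) * sp.cos(th) * sp.cos(psi / 2)
g1p = d * sp.I * sp.exp(sp.I * ph) * sp.cos(psi / 2)
g2t = d * sp.exp(sp.I * ph) * sp.cos(th) * sp.sin(psi / 2)
g2p = d * sp.I * sp.exp(sp.I * ph) * sp.sin(psi / 2)
print(sp.simplify(g1t * g2p - g1p * g2t))  # 0, for all angles
```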
By use of the pump-probe technique, the state of the spin will also be measured to show the correlation with the polarization of the photon. To overcome the probabilistic nature of the entanglement (as projection is needed) and to improve the quantum efficiency degraded by the scattering problem, cavities and waveguides may be employed to enhance and select desired photon emission processes Liu et al. (2004); Yao et al. (2004).

## VI Pump-probe experiment for SGC detection in a quantum dot

In this section we provide a theoretical analysis for the pump-probe experiment which explicitly demonstrated SGC Dutt et al. (2004). The system is the heavy-hole trion system introduced above. We present a treatment based on the idea that SGC may be viewed as a decay to one bright state which is a superposition of the eigenstates. The vector character of the mean value of the spin, which also helps develop intuition for the SGC effect, is employed, and in fact it anticipates some of the theoretical results of the pump-probe measurements calculated by perturbative solution of the density matrix in the remainder of this section.

### VI.1 Geometrical picture of SGC

As shown by Bloch Bloch (1946) and Feynman et al. Feynman et al. (1957), an ensemble of two-level systems can be described by a rotating vector. This picture provides an intuitive understanding of the spin coherence generated by the optical excitation and spontaneous decay of the trion states. For simplicity, we will assume the short-pulse limit in this section. Regardless of the presence or absence of the magnetic field, there is freedom in the choice of the quantization direction, and it is convenient in this case to choose the spin eigenstates quantized in the growth ($z$) direction, $|\uparrow\rangle$ and $|\downarrow\rangle$. The two trion states $|\tau\rangle$ and $|\bar\tau\rangle$ have $J_z$-components $+3/2$ and $-3/2$, respectively. The selection rules are such that a photon with helicity $\pm 1$ ($\sigma^\pm$ circular polarization) excites the electron $|\uparrow\rangle$ or $|\downarrow\rangle$ to the trion state $|\tau\rangle$ or $|\bar\tau\rangle$, respectively. We will consider a $\sigma^+$ polarized pump, which excites spin-up electrons to the trion state $|\tau\rangle$, leaving the remaining electron population spin-polarized in the $-z$ direction. Due to the selection rules, the trion state $|\tau\rangle$ can only relax back to the spin-up state by emitting a $\sigma^+$ polarized photon, and after recombination, the electron remains unpolarized.

Now let us consider a strong magnetic field, applied perpendicular to the optical axis, $\hat z$. In this so-called Voigt configuration, the Zeeman states are quantized in the $x$-direction and are energy eigenstates with energies $\pm\omega_L$, respectively, while the trion states can still be assumed quantized in the $z$-direction [see Fig. 2 (b)]. Note that the low-lying states in foregoing sections are now denoted by the spin states, $|\pm\rangle$. In the short-pulse limit, the pulse spectrum is much broader than the spin splitting or, equivalently, the pulse duration is much shorter than the spin precession period, so the excitation process is virtually unaffected by the magnetic field: the $\sigma^+$ polarized pump excites the electron to the trion state $|\tau\rangle$, leaving the electrons spin-polarized in the $-z$ direction, as in the zero-field case [See Fig. 2 (c)]. The pulse generates coherence between the two eigenstates $|+\rangle$ and $|-\rangle$, which is the conventional Raman coherence Leonhardt et al. (1987) generated by a pulse with a spectrum broad enough to cover both the near-degenerate transitions. The spin precesses in the magnetic field normal to the plane of precession with frequency $2\omega_L$. In other words, the state oscillates between the spin up and down states. 
The Raman coherence can be determined by the excitation-induced change of the population in the spin state $|\uparrow\rangle$,

$$\rho^{R}_{\uparrow\uparrow}(t) = -\frac{\rho_{\tau\tau}}{2}\left[1+\cos(2\omega_L t)\,e^{-\gamma_2 t}\right], \tag{33}$$

where $\rho_{\tau\tau}$ is the population of the trion state immediately after the excitation pulse, and $\gamma_2$ is the damping rate of the spin polarization (due to spin dephasing and inhomogeneous broadening). On the other hand, when the system is in the trion state $|\tau\rangle$, the trion will relax by emitting a $\sigma^+$-polarized photon, leaving an electron spin-polarized in the $+z$ direction, i.e., generating coherence between the two spin eigenstates (SGC). The trion decay can be treated as a stochastic quantum jump process with the jump rate $2\Gamma$. After the quantum jump, the evolution of the system can be described by a spin vector rotating under the transverse magnetic field. Thus, the spin polarization generated by the spontaneous emission during $[t', t'+dt']$ can be determined by

$$d\rho^{SGC}_{\uparrow\uparrow}(t,t') = \frac{\rho_{\tau\tau}\,e^{-2\Gamma t'}\,2\Gamma\,dt'}{2}\left[1+\cos(2\omega_L(t-t'))\,e^{-\gamma_2(t-t')}\right]. \tag{34}$$

The precessing spin vector is deformed by the accumulation of increments through the optical decay into a spiral curve [see Fig. 2 (d)]. The accumulated spin polarization due to the spontaneous emission is

$$\rho^{SGC}_{\uparrow\uparrow}(t) = \int_0^t d\rho^{SGC}_{\uparrow\uparrow}(t,t') = \frac{\rho_{\tau\tau}}{2}\,\mathfrak{R}\left[1 - e^{-2\Gamma t} + \frac{2\Gamma}{2\Gamma-\gamma_2-i2\omega_L}\left(e^{-i2\omega_L t-\gamma_2 t} - e^{-2\Gamma t}\right)\right]. \tag{35}$$

For an initially unpolarized system, the total spin polarization in the $z$ direction after the action of the pump and the recombination process is given by

$$\rho^{(2)}_{\uparrow\uparrow} = \rho^{R}_{\uparrow\uparrow} + \rho^{SGC}_{\uparrow\uparrow} = -\frac{\rho_{\tau\tau}}{2}\left[(1+a_\Gamma)e^{-2\Gamma t} + a_0\cos(2\omega_L t - \phi)\,e^{-\gamma_2 t}\right], \tag{36}$$

where

$$a_\Gamma \equiv \frac{2\Gamma(2\Gamma-\gamma_2)}{(2\Gamma-\gamma_2)^2+4\omega_L^2}, \tag{37}$$
$$a_0 \equiv \sqrt{\frac{\gamma_2^2+4\omega_L^2}{(2\Gamma-\gamma_2)^2+4\omega_L^2}}, \tag{38}$$
$$\phi \equiv -\arctan\frac{2\Gamma-\gamma_2}{2\omega_L} - \arctan\frac{\gamma_2}{2\omega_L}. \tag{39}$$

As shown in Fig. 2 (d), SGC induces a phase shift of the spin coherence as compared to the Raman coherence. Note also the different amplitudes of the Bloch vectors in the case with and without SGC. We can see that if the recombination is much faster than the spin precession under the magnetic field, i.e., $\Gamma \gg \omega_L$, SGC actually cancels the Raman coherence. This is not surprising since such a limit simply corresponds to the zero-field case. In the strong field limit where $\omega_L \gg \Gamma$, the spin precession will average SGC to zero, which corresponds to the two-pathway decay discussed in Sec. II. From Eq. (34) it can be seen that at any specific time $t'$ the trion relaxes to the state $|\uparrow\rangle$, so, as shown in Sec. II, a time-selective measurement can recover the SGC from the incoherent two-pathway decay. Without such a projection, as the spin coherence generated at different times has different phase shifts, the time averaging [see Eq. (35)] leads to the vanishing of the SGC.

In a pump-probe experiment, what is measured is the differential transmission signal (DTS), i.e., the difference between the probe transmission with and without the pump pulse. In the same-circular polarization (SCP) pump-probe configuration, the probe measures the change in the population difference created by the pump. Hence, the DTS is given by

$$\Delta T_{SCP} \propto (3+a_\Gamma)e^{-2\Gamma t_d} + a_0\cos(2\omega_L t_d - \phi), \tag{40}$$

where $t_d$ is the delay time between the pump and probe pulses. The DTS reveals the spin beatings, and the SGC effect manifests itself in the dependence of the beat amplitude and phase shift on the strength of the magnetic field. The pump-probe experiment can also be done in the opposite circular polarization (OCP) configuration. The probe measures the change of population of the spin-down state $|\downarrow\rangle$. The DTS in this case is

$$\Delta T_{OCP} \propto (1-a_\Gamma)e^{-2\Gamma t_d} - a_0\cos(2\omega_L t_d - \phi). \tag{41}$$

The spin beat has the opposite sign to the SCP case. 
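The field dependence of the beat amplitude and phase shift in Eqs. (38)-(39) can be tabulated directly; the sketch below (mine, with arbitrary example rates rather than the experimental parameters) shows $a_0 \to 1$ and $\phi \to 0$ in the strong-field limit, as the discussion above anticipates.

```python
import math

# Beat amplitude a0 and phase shift phi of the DTS, Eqs. (38)-(39),
# as functions of the Larmor frequency wL; Gamma and gamma2 are example rates.
Gamma, gamma2 = 1.0, 0.1

for wL in [0.1, 0.5, 1.0, 2.0, 5.0]:
    a0 = math.sqrt((gamma2**2 + 4 * wL**2) /
                   ((2 * Gamma - gamma2)**2 + 4 * wL**2))
    phi = (-math.atan2(2 * Gamma - gamma2, 2 * wL)
           - math.atan2(gamma2, 2 * wL))
    print(wL, a0, phi)  # a0 -> 1 and phi -> 0 as wL >> Gamma
```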
Similar analysis shows that if either the pump or the probe pulse is linearly polarized there will be no spin beat in the DTS.

### VI.2 Perturbative solution of the probe signal

The optical field of the pump and probe pulses can be written as

$$E(t) = (e_+E_{1+}+e_-E_{1-})\chi_1(t)e^{-i\Omega_1 t} + (e_+E_{2+}+e_-E_{2-})\chi_2(t-t_d)e^{-i\Omega_2(t-t_d)}, \tag{42}$$

where the subscripts 1 and 2 denote the pump and probe pulses, respectively, and $e_\pm$ are the unit vectors of the $\sigma^\pm$-polarizations. The dipole operator is $\hat d = d\left(e_+|\tau\rangle\langle\mp| \pm e_-|\bar\tau\rangle\langle\pm|\right) + \mathrm{h.c.}$ Thus, in the rotating wave approximation, the Hamiltonian in the $\{|+\rangle,|-\rangle,|\tau\rangle,|\bar\tau\rangle\}$ basis can be written in matrix form as

$$H = \begin{bmatrix} -\omega_L & 0 & -d^*E^*_+(t) & -d^*E^*_-(t) \\ 0 & \omega_L & -d^*E^*_+(t) & +d^*E^*_-(t) \\ -dE_+(t) & -dE_+(t) & \epsilon_g & 0 \\ -dE_-(t) & +dE_-(t) & 0 & \epsilon_g \end{bmatrix}, \tag{43}$$

where $\epsilon_g$ is the energy of the trion states, with $\gamma_1$, $\gamma_2$, and $\Gamma$ denoting the spin-flip rate, the spin depolarizing rate, and the trion decay rate, respectively. The explicit equations for each element of the density matrix are

$$\dot\rho_{\tau,+} = i[\rho,H]_{\tau,+} - \Gamma\rho_{\tau,+}, \tag{44}$$
$$\dot\rho_{\tau,-} = i[\rho,H]_{\tau,-} - \Gamma\rho_{\tau,-}, \tag{45}$$
$$\dot\rho_{+,+} = i[\rho,H]_{+,+} - \gamma_1\rho_{+,+} + \Gamma(\rho_{\tau\tau}+\rho_{\bar\tau\bar\tau}), \tag{46}$$
$$\dot\rho_{-,-} = i[\rho,H]_{-,-} + \gamma_1\rho_{+,+} + \Gamma(\rho_{\tau\tau}+\rho_{\bar\tau\bar\tau}), \tag{47}$$
$$\dot\rho_{+,-} = i[\rho,H]_{+,-} - \gamma_2\rho_{+,-} + \Gamma_c(\rho_{\tau\tau}-\rho_{\bar\tau\bar\tau}), \tag{48}$$
$$\dot\rho_{\tau\tau} = i[\rho,H]_{\tau\tau} - 2\Gamma\rho_{\tau\tau}, \tag{49}$$
$$\dot\rho_{\bar\tau,\tau} = i[\rho,H]_{\bar\tau,\tau} - 2\Gamma\rho_{\bar\tau,\tau}, \tag{50}$$
$$\dot\rho_{\bar\tau\bar\tau} = i[\rho,H]_{\bar\tau\bar\tau} - 2\Gamma\rho_{\bar\tau\bar\tau}. \tag{51}$$

The Markov-Born approximation for the system-photon coupling has been employed. The term representing the spontaneously generated spin coherence due to the trion recombination is indicated by the suffix $c$; $\Gamma_c$ should be equal to $\Gamma$. However, we singled out the SGC term so that $\Gamma_c$ can be artificially set to zero for a theoretical comparison between the results with and without the SGC effect.

In the pump-probe experiment, the DTS corresponds to the third-order optical response. The absorption of the probe pulse is proportional to the work done by the probe pulse, and the DTS is Bloembergen (1996)

$$\Delta T \propto -W^{(3)} = -2\,\mathfrak{R}\int \dot P^{(3)}(t)\cdot E^*_2(t-t_d)\,dt \approx -2\Omega_2\,\mathfrak{I}\int \tilde P^{(3)}(\omega+\Omega_2)\cdot \tilde E^*_2(\omega+\Omega_2)\,\frac{d\omega}{2\pi}. \tag{52}$$

The third-order optical polarization of the system can be calculated directly by expanding the density matrix according to the order of the optical perturbation

$$P^{(3)} = e_+d\left[\rho^{(3)}_{\tau,-}+\rho^{(3)}_{\tau,+}\right] + e_-d\left[\rho^{(3)}_{\bar\tau,-}-\rho^{(3)}_{\bar\tau,+}\right]. \tag{53}$$

Thus, given the $\sigma^+$-polarized pump pulse, the third-order polarization in the SCP and OCP cases can be respectively calculated as Bloembergen (1996)

$$P^{(3)}_{SCP}(t) = e_+d\left[\rho^{(3)}_{\tau,-}(t)+\rho^{(3)}_{\tau,+}(t)\right], \tag{54}$$
$$P^{(3)}_{OCP}(t) = e_-d\left[\rho^{(3)}_{\bar\tau,-}(t)-\rho^{(3)}_{\bar\tau,+}(t)\right]. \tag{55}$$

### VI.3 Analytical results

The density matrix can be calculated straightforwardly order by order with respect to the pulse, taking the initial state of the system to be the equilibrium state. The result for the second-order spin coherence due to the pump pulse is:

$$\tilde\rho^{(2)}_{+,-}(\omega) = +\frac{X_1\,\rho^{(0)}_{-}}{\omega-2\omega_L+i\gamma_2}\int_{-\infty}^{+\infty}\frac{\chi^*_1(\omega'-\omega)\chi_1(\omega')}{\omega'-\Delta_1-\omega_L+i\Gamma}\frac{d\omega'}{2\pi} - \frac{X_1\,\rho^{(0)}_{+}}{\omega-2\omega_L+i\gamma_2}\int_{-\infty}^{+\infty}\frac{\chi_1(\omega'+\omega)\chi^*_1(\omega')}{\omega'-\Delta_1+\omega_L-i\Gamma}\frac{d\omega'}{2\pi} + \frac{iX_1\Gamma_c\,\rho^{(0)}_{\pm}}{(\omega-2\omega_L+i\gamma_2)(\omega+i2\Gamma)}\int_{-\infty}^{+\infty}\frac{\chi_1(\omega'+\omega)\chi^*_1(\omega')}{\omega'-\Delta_1\pm\omega_L-i\Gamma}\frac{d\omega'}{2\pi} - \frac{iX_1\Gamma_c\,\rho^{(0)}_{\pm}}{(\omega-2\omega_L+i\gamma_2)(\omega+i2\Gamma)}\int_{-\infty}^{+\infty}\frac{\chi^*_1(\omega'-\omega)\chi_1(\omega')}{\omega'-\Delta_1\pm\omega_L+i\Gamma}\frac{d\omega'}{2\pi}, \tag{56}$$

where $\Delta_1$ is the detuning, and $X_1$ is the circular degree of the pulse polarization. In the equation above, the first two terms correspond to the Raman coherence generated by the pump excitation Leonhardt et al. (1987), and the last two terms represent the spontaneously generated coherence. Obviously, for a linearly polarized pump, $X_1 = 0$, no spin coherence is generated either by excitation or by recombination, so there will be no spin beats in the DTS. In the short-pulse limit, the spin coherence after the pump and recombination can be approximately expressed as

$$\rho^{(2)}_{+,-}(t) \approx X_1|\chi_1(\Delta_1)|^2\left(\frac{\Gamma_c}{2\Gamma-\gamma_2-2i\omega_L}-\frac{1}{2}\right)e^{-i(2\omega_L-i\gamma_2)(t-t_1)}. \tag{57}$$ 
This formula can be directly compared to the result obtained by the intuitive picture in Sec. VI.1. The physical meaning of the two terms in Eq. (57) is transparent: the first term is SGC, whose amplitude and phase shift depend on the ratio of the recombination rate to the Zeeman splitting, and the second term is just the optically pumped Raman coherence, which in the short pulse limit is independent of the Zeeman splitting. Having obtained the second-order results, we can readily derive the third-order density matrix and, in turn, the DTS can be calculated by use of Eq. (52). In general, the DTS can be expressed as

$$\Delta T \propto A\cos(2\omega_L t_d - \phi)\,e^{-\gamma_2 t_d} + B\,e^{-2\Gamma t_d} + C\,e^{-\gamma_1 t_d}, \tag{58}$$

and the spin coherence amplitude $A$ and phase shift $\phi$, the Pauli blocking amplitude $B$, and the spin non-equilibrium population $C$ can all be numerically calculated and, in the short-pulse limit, can also be analytically derived as

$$A \approx |\chi_1(\Delta_1)|^2|\chi_2(\Delta_2)|^2 X_1X_2\sqrt{\frac{\gamma_2^2+4\omega_L^2}{(2\Gamma_c-\gamma_2)^2+4\omega_L^2}}, \tag{59}$$
$$\phi \approx -\arctan\left(\frac{2\Gamma_c-\gamma_2}{2\omega_L}\right) - \arctan\left(\frac{\gamma_2}{2\omega_L}\right), \tag{60}$$
$$B \approx |\chi_1(\Delta_1)|^2|\chi_2(\Delta_2)|^2\,[I_1I_2+2I$$
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9437813758850098, "perplexity": 598.0598284182782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710870.69/warc/CC-MAIN-20221201221914-20221202011914-00599.warc.gz"}
https://www.physicsforums.com/threads/name-the-molecule.760164/
# Homework Help: Name the molecule 1. Jul 1, 2014 ### Lizzy 1. The problem statement, all variables and given/known data I have to name the particular molecule that I am looking at and fill in the blanks with that name. 2. Relevant equations Choose from: 2-methoxybutane, 3-butene, 1-butene, butanal, 2-chlorobutane, ethylmethyl ether, butylmethyl ether, or 2-butanone 3. The attempt at a solution The first thing that I did was identify the central molecule as 2-butanol, but I am not quite sure what the arrows indicate, whether it's the removal of the compound next to the arrow or the addition of it; if someone could identify the type of diagram and nudge me in the right direction that would be fantastic! Last edited: Jul 1, 2014 2. Jul 1, 2014 ### Staff: Mentor It is an orgo way of stating "something reacting with things written above the arrow, yielding". 3. Jul 1, 2014 ### Lizzy Okay, I get it, thank you so much! So if I follow, a) could be 1-butene, b) 2-chlorobutane, c) 2-butanone, d) 2-methoxybutane? Please note that it's not supposed to be Cu, it's supposed to be Cl! I'm sorry... 4. Jul 8, 2014 ### Lizzy Or do I have b) and c) backwards in my previous answer?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.843996524810791, "perplexity": 3582.602289481318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589172.41/warc/CC-MAIN-20180716021858-20180716041858-00311.warc.gz"}
https://spmphysics.blog.onlinetuition.com.my/waves/wave-pattern-interference/
# Wave Pattern Interference

### Nodal Line and Anti-nodal Line

1. An anti-node is a point of maximum amplitude, where constructive interference occurs, whereas a node is a point of minimum amplitude, where destructive interference occurs.
2. The anti-nodal line joins all anti-node points. The nodal line joins all node points.

#### Formula for Interference

λ = ax/D

λ = Wavelength
a = Distance between the two wave sources
x = Distance between two successive anti-node lines or node lines
D = Distance from the wave sources to the plane where x is measured.
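A one-line numerical example of the relation λ = ax/D (the numbers below are made up for illustration, not taken from the page):

```python
# Wavelength from the interference relation lambda = a*x / D.
a = 0.5e-3   # separation between the two wave sources (m)
x = 1.2e-3   # spacing between successive anti-nodal lines (m)
D = 1.0      # distance from the sources to the measurement plane (m)

wavelength = a * x / D
print(wavelength)  # 6.0e-07 m, i.e. 600 nm
```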
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.964493453502655, "perplexity": 2918.44261854052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072366.31/warc/CC-MAIN-20210413122252-20210413152252-00093.warc.gz"}
http://mathhelpforum.com/differential-equations/195939-what-types-ode-these.html
# Math Help - What types of ODE are these?

1. ## What types of ODE are these?

$(1+e^x)y dy +2(x+e^x)dy=0$

$xy^3dy-(x^4+y^4)dx=0$

The first one I am not sure about at all, but is the second one homogeneous, with a substitution required of $v=\frac{y}{x}$?

2. ## Re: What types of ODE are these?

Originally Posted by Paulo1913
$(1+e^x)y dy +2(x+e^x)dy=0$
$xy^3dy-(x^4+y^4)dx=0$
The first one I am not sure about at all, but is the second one homogeneous, with a substitution required of $v=\frac{y}{x}$?

For the first one I guess that there is a typo. The second one is a non-exact differential equation, so you should apply the Integrating Factor Technique:

Let's denote: $M=-(x^4+y^4) ~\text{and}~N=xy^3$

$\frac{\partial M}{\partial y}=-4y^3 ~ \text{and}~\frac{\partial N}{\partial x}=y^3$

Since: $\frac{\frac{\partial M}{\partial y}-\frac{\partial N}{\partial x}}{N}=\frac{-5y^3}{xy^3}=\frac{-5}{x}$

The integrating factor is: $u(x)=e^{-\int \frac{5}{x} \,dx}=x^{-5}$

3. ## Re: What types of ODE are these?

Oh I see, thanks. For the first one, it should be a dx for the first term, not another dy.
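To close the loop on the computation in the thread, here is a small sympy check (my own verification, not part of the original posts). Multiplying through by the integrating factor $u(x)=x^{-5}$ makes the equation exact, and integrating gives the implicit solution $\frac{y^4}{4x^4}-\ln x = C$; the snippet confirms that this satisfies the ODE.

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)
y = sp.Function('y')

# xy^3 dy - (x^4 + y^4) dx = 0, rewritten as dy/dx = (x^4 + y^4) / (x y^3)
ode = sp.Eq(y(x).diff(x), (x**4 + y(x)**4) / (x * y(x)**3))

# Implicit solution obtained after multiplying by the integrating factor x**(-5)
sol = sp.Eq(y(x)**4 / (4 * x**4) - sp.log(x), C)

print(sp.checkodesol(ode, sol))  # expected: (True, 0)
```

This also bears out the original poster's hunch: the equation is homogeneous, and the substitution v = y/x leads to the same implicit solution.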
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9341355562210083, "perplexity": 819.2646443146306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989331.34/warc/CC-MAIN-20150728002309-00300-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/help-with-this-problem-of-manometry.950674/
# Help with this problem of manometry

• #1
50 0

## Homework Statement

A J-shaped tube has a uniform cross section and contains air at an atmospheric pressure of 75 cm of Hg. Mercury is poured into the right arm, which compresses the air sealed in the left arm. What is the height of the mercury column in the left arm when the right arm is full of mercury? Consider that the temperature is constant at every moment and that the air is an ideal gas. Consider h1 = 0.25 m and h2 = 2.25 m.

2. Relevant equations

P = Patm + density*g*h
P1*V1 = P2*V2

## The Attempt at a Solution

I have equalized the pressure at point 1 with the pressure at point 2, which looks like this:

Pressure at point 1 = air density * gravity * (0.25 - h)
Pressure at point 2 = atmospheric pressure + mercury density * gravity * (2.25 - h)

The atmospheric pressure, I suppose, is 75 cm Hg. From the previous equations I would solve for "h". My question is whether this approach is right, and how I could account for the density of the air. Could I solve the problem by applying the theory of ideal gases? Thanks for your help.

(Two diagrams attached.)

• #2
kuruman
Homework Helper
Gold Member
8,726 2,131

Yes, you can use the ideal gas law, but you have to make some assumptions about how the mercury fills the tube. I suspect the amount of air trapped in the left arm originally occupied the entire left arm at atmospheric pressure. Your diagram does not show what h1 and h2 are. I assume they are the heights of the mercury columns in the two arms. Correct?

• #3
50 0

Kuruman, thank you for your answer. h1 and h2 are the lengths of the left and right arms, respectively. So, how can I solve the problem with the ideal gas law? Thank you.

• #4
kuruman
Homework Helper
Gold Member
8,726 2,131

So you are saying that the left arm has length $y$ of air and length $h_1-y$ of mercury and the right arm has length $h_2$ of mercury. Correct? You can use the ideal gas law to relate the final pressure of the air to the initial (atmospheric) pressure in the left arm. Note that the volume is proportional to the cross-sectional area, which does not change when the arm is partially filled with mercury.

• #5
50 0

So I can write Patm * h_initial = P2 * (0.25 - h), where h is the height of the compressed air. My doubt is: what is h_initial? Can I convert the 75 cm Hg to cm of air?

• #6
kuruman
Homework Helper
Gold Member
8,726 2,131

Did you not tell me in post #3 that h_initial is the length of the left arm, 0.25 m?

• #7
50 0

Ok, thank you. So, am I saying something right if I say that P1 = Patm + air density * gravity * h1? Or could I assume that P1 = Patm?

• #8
kuruman
Homework Helper
Gold Member
8,726 2,131

I think it is safe to assume that p1 = patm.
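The thread stops before a full solution, so here is a hedged worked sketch in Python. It assumes a geometry I cannot confirm from the missing figure: the sealed left arm of length h1 initially holds air filling the whole arm at atmospheric pressure, the two arms join at the bottom bend, and the open right arm of length h2 ends up completely full of mercury. Under those assumptions, Boyle's law plus a pressure balance at the bend gives a quadratic in the final length y of the trapped air column.

```python
import math

# Units: pressures in cmHg, lengths in cm. The geometry below is an assumption.
p_atm = 75.0   # atmospheric pressure
h1 = 25.0      # length of the sealed left arm
h2 = 225.0     # length of the open right arm, full of mercury at the end

# Boyle's law for the trapped air (constant cross section): p_air = p_atm * h1 / y
# Pressure balance at the bend: p_air + (h1 - y) = p_atm + h2
#   => p_atm*h1/y + h1 - y = p_atm + h2
#   => y**2 + (p_atm + h2 - h1)*y - p_atm*h1 = 0
b = p_atm + h2 - h1    # 275
c = -p_atm * h1        # -1875
y = (-b + math.sqrt(b * b - 4 * c)) / 2           # positive root of the quadratic
print(f"trapped air column  y ≈ {y:.2f} cm")       # ≈ 6.66 cm
print(f"mercury in left arm ≈ {h1 - y:.2f} cm")    # ≈ 18.34 cm
```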
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9230082035064697, "perplexity": 995.3251252255451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370525223.55/warc/CC-MAIN-20200404200523-20200404230523-00060.warc.gz"}
http://www.thestudentroom.co.uk/showthread.php?t=2046693&page=4&page=4
# A Summer of Maths

1. Re: A Summer of Maths

(Original post by electriic_ink)
Let z=y'. Also quite easy: How many (real) solutions does have? Knowledge required: GCSE & below.

Thanks (and none surely?)

2. Re: A Summer of Maths

(Original post by Brit_Miller)
Thanks (and none surely?)

Yeah

3. Re: A Summer of Maths

(Original post by 4ever_drifting)
Love these kind of problems Just a note though - could you not observe that you need the product of the roots of the quadratic equation, which is given by c/a in 0=ax^2+bx+c, so you know the answer is 26 without having to find the roots then multiply them together?

You could in this case because the quadratic simultaneous equations you get when substituting for X and when substituting for Y are the same. I preferred not to take that shortcut on the off-chance that substituting for Y generated a different quadratic than substituting for X. I think you could probably solve it geometrically as well, the method I posted was just the one I prefer to use.

4. Re: A Summer of Maths

(Original post by DJMayes)
At the Nottingham University Open Day there was a "Maths Trail" with several interesting questions on it. The questions ranged from requiring a working knowledge of arithmetic progressions and combinations to lowest common multiples and counting squares; and more emphasis was put on thinking about them than slogging through endless manipulation. I thought I'd share one with you. The question is of the kind that could be set in C1, but is an interesting one:

A rectangle is inscribed inside a circle of radius 6 units such that each of the vertices of the rectangle lie on the circumference of the circle. Given that the perimeter of the rectangle is 28 units, what is the area?

Required Knowledge:
Spoiler: Show
- Pythagoras' Theorem

Hints:
Spoiler: Show
What information are you given? You have both the perimeter of the rectangle, and the radius of the circle it is inscribed inside. How can you relate these in terms of the length and width of the rectangle?
Spoiler: Show
Consider the diagonal of the rectangle

Full Solution:
Spoiler: Show
Let X and Y represent the length and width of the rectangle. Using this and the perimeter we can automatically deduce an equation:

2X + 2Y = 28

Which simplifies to X + Y = 14

As we are told that the radius of the circle is 6, we also know that its diameter must be 12. This diameter forms the diagonal of the rectangle. From this, another equation can be deduced using Pythagoras' Theorem:

X^2 + Y^2 = 144

We now have a set of simultaneous equations to solve.
Re-arranging the first to leave Y in terms of X and substituting into the second equation leaves you with this:

X^2 + (14 - X)^2 = 144

This expands to:

X^2 + X^2 - 28X + 196 = 144

Which then simplifies to:

2X^2 - 28X + 52 = 0

or:

X^2 - 14X + 26 = 0

You can then complete the square or use the quadratic formula to get the result:

X = 7 + rt(23) or X = 7 - rt(23)

Using this you can then confirm that Y takes the complementary value (i.e. if X = 7 + rt(23) then Y = 7 - rt(23) and vice-versa). Now you know both side lengths, you simply multiply them together to get the area. The final answer is 26.

Spoiler: Show
This could be made a lot neater. Starting as you did, x + y = 14, and x^2 + y^2 = 144. Now see that (x+y)^2 = x^2 + y^2 + 2xy and so area = xy = 0.5(196 - 144) = 26

5. Re: A Summer of Maths

(Original post by hassi94)
Spoiler: Show
This could be made a lot neater. Starting as you did, x + y = 14, and x^2 + y^2 = 144. Now see that (x+y)^2 = x^2 + y^2 + 2xy and so area = xy = 0.5(196 - 144) = 26

Spoiler: Show
That's a very clever trick for doing it actually! Unfortunately I have an insistence on solving all simultaneous equations (even linear ones) through substitution instead of subtraction or manipulation, which prevents me from spotting things like this.

6. Re: A Summer of Maths

This one is a bit tricky, but I think it has something to teach; use hints if you get stuck.

{*} Question: Let $f$ be a function with the property that there exists $w \neq 0$ such that $f(x+w) = \frac{f(x)-5}{f(x)-3}$ for all $x$. Prove that $f$ is periodic.

{**} Required: (A-level)
Spoiler: Show
A function $f$ is periodic, with period $T$, if for every $x$ in its domain one has $f(x+nT) = f(x)$ where $n$ is an integer.

{***} Hints:
Spoiler: Show
(1)
Spoiler: Show
(2)
Spoiler: Show
How does $f(x+2w)$ look like?

7. Re: A Summer of Maths

(Original post by hassi94)
Spoiler: Show
This could be made a lot neater. Starting as you did, x + y = 14, and x^2 + y^2 = 144. Now see that (x+y)^2 = x^2 + y^2 + 2xy and so area = xy = 0.5(196 - 144) = 26

wasn't this in the online lecture thing?

8. Re: A Summer of Maths

(Original post by james22)
What is the derivative of y=x^x? What is the derivative and inverse of y=x^x^x^x^... (an infinite string of x's)? Here x^x^x=x^(x^x), not (x^x)^x. Also, for what values of x does y exist?

Some thoughts under cruel assumptions.
Spoiler: Show
Without any justification, I claim that $y = x^y$. Maybe the inverse will be $f^{-1}(x) = x^{1/x}$. Now, note that $\ln y = y\ln x$, and hence $\frac{y'}{y} = y'\ln x + \frac{y}{x}$, which gives as a final result $y' = \frac{y^2}{x(1 - y\ln x)}$.

9. Re: A Summer of Maths

I wonder how people on here find Analysis in general. Do you like it (assuming you have seen some), or do you think it is difficult to understand and apply?

10. Re: A Summer of Maths

This one is a bit tricky, but I think it has something to teach; use hints if you get stuck.

{*} Question: Let $f$ be a function with the property that there exists $w \neq 0$ such that $f(x+w) = \frac{f(x)-5}{f(x)-3}$ for all $x$. Prove that $f$ is periodic.

{**} Required: (A-level)
Spoiler: Show
A function $f$ is periodic, with period $T$, if for every $x$ in its domain one has $f(x+nT) = f(x)$ where $n$ is an integer.

{***} Hints:
Spoiler: Show
(1)
Spoiler: Show
(2)
Spoiler: Show
How does $f(x+2w)$ look like?

Didn't seem that hard.
Spoiler: Show
Let f(x)=F. Subbing x for x+w we obtain f(x+2w)=(2F-5)/(F-2). Subbing x for x+w again we obtain f(x+3w)=(-3F+5)/(-F+1). Subbing x for x+w again we obtain f(x+4w)=F=f(x), so clearly f(x)=f(x+4nw) for every n. The period is 4w. Thus we are done.

Last edited by Blutooth; 03-07-2012 at 02:20.

11. Re: A Summer of Maths

Some thoughts under cruel assumptions.
Spoiler: Show
Without any justification, I claim that $y = x^y$.
Maybe the inverse will be $f^{-1}(x) = x^{1/x}$. Now, note that $\ln y = y\ln x$, and hence $\frac{y'}{y} = y'\ln x + \frac{y}{x}$, which gives as a final result $y' = \frac{y^2}{x(1 - y\ln x)}$.

You have the right inverse and are right in the simplification of the original equation. I cannot remember the derivative, but your answer looks about right. Any luck with the values of x for which this converges?

12. Re: A Summer of Maths

I wonder how people on here find Analysis in general. Do you like it (assuming you have seen some), or do you think it is difficult to understand and apply?

I find analysis a bit boring to be honest. Perhaps because it is not my strong point by any means, I don't know. I prefer "even purer maths"... They seem to have more depth and involve more creativity.

13. Re: A Summer of Maths

(Original post by Blutooth)
...

Can you use a spoiler, please. Thanks.
Spoiler: Show
Can take the values or ?

(Original post by james22)
Any luck with the values of x for which this converges?

I don't have a good reason to believe what I derived works, so I have only rough ideas.

(Original post by Lord of the Flies)
I find analysis a bit boring to be honest. I prefer "even purer maths"... They seem to have more depth and involve more creativity.

I have studied a bit of it, and I find it interesting; some of Cauchy's stuff is amazing and I would say it is creative. However, I do get stuck from time to time -- I thought I understood uniform continuity two months ago, but now I have to read it again.

14. Re: A Summer of Maths

Can you use a spoiler, please. Thanks.
Spoiler: Show
Can take the values or ?

Spoiler: Show
No, by considering the denominator of f(x+2w), f(x+3w) etc., or plugging f(x+w)=3 and working out the value of f(x).

Last edited by Blutooth; 03-07-2012 at 02:11.

15. Re: A Summer of Maths

{*} Question: The polynomial is irreducible over . i) By completing the square, show that is not irreducible over the set of real numbers. Hence, derive the Sophie Germain algebraic identity by starting from the left-hand side. ii) Evaluate

{**} Required:
Spoiler: Show
A polynomial is said to be irreducible over a set if it cannot be factored into polynomials with coefficients from the given set. As an example, is irreducible over the set of rational numbers denoted by .
Spoiler: Show

16. Re: A Summer of Maths

(Original post by james22)
You have the right inverse and are right in the simplification of the original equation. I cannot remember the derivative but your answer looks about right. Any luck with the values of x for which this converges?

Spoiler: Show
Domain and convergence clearly both and are undefined. and strictly increases strictly increases so is injective for and strictly increases for and strictly decreases for . Also, . Therefore is undefined for since it would return exactly two values.
Additionally, since the maximum of occurs only for has exactly one solution so is injective for and is undefined for all Therefore convergence occurs when or If that is what you meant by convergence? That is what I mean by convergence. You are almost right, but it converges for all 0<x<e^(1/e) 19. Re: A Summer of Maths (Original post by james22) That is what I mean by convergence. You are almost right, but it converges for all 0<x<e^(1/e) Really? I don't see how... When returns two values, no? 20. Re: A Summer of Maths (Original post by Lord of the Flies) Here's an easy one! Question Evaluate Required Spoiler: Show L'Hôpital's rule Spoiler: Show Is it undefined? I make lim x->0 f'(x)/g'(x) 0/0, so obviously g'(x)=0
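As an aside, here is a quick symbolic check (my own verification, not a post from the thread) of the periodicity problem earlier in the thread: iterating the step $F \mapsto \frac{F-5}{F-3}$ four times returns $F$, confirming Blutooth's period of $4w$.

```python
import sympy as sp

F = sp.symbols('F')
step = lambda t: (t - 5) / (t - 3)  # the map F -> f(x + w)

print(sp.simplify(step(step(step(step(F))))))  # prints F, so f(x + 4w) = f(x)
```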
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729385137557983, "perplexity": 1272.8463095218565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699798457/warc/CC-MAIN-20130516102318-00037-ip-10-60-113-184.ec2.internal.warc.gz"}
http://en.wikipedia.org/wiki/Kernel_trick
# Kernel trick

For machine learning algorithms, the kernel trick is a way of mapping observations from a general set S into an inner product space V (equipped with its natural norm), without having to compute the mapping explicitly, because the observations will gain meaningful linear structure in V. Linear classifications in V are equivalent to generic classifications in S. The trick or method used to avoid the explicit mapping is to use learning algorithms that only require dot products between the vectors in V, and choose the mapping such that these high-dimensional dot products can be computed within the original space, by means of a kernel function.

For $x, y$ in $S$, certain functions $K(x,y)$ can be expressed as an inner product (usually in a different space). K is often referred to as a kernel or a kernel function. The word kernel is used in different ways throughout mathematics.

If one is insightful regarding a particular machine learning problem, one may manually construct $\varphi: S \to V$ such that $K(x,y) = \langle \varphi(x), \varphi(y) \rangle_V$ and verify that $\langle \cdot, \cdot \rangle_V$ is indeed an inner product. Furthermore, an explicit representation for $\varphi$ is not required: it suffices to know that V is an inner product space. Conveniently, based on Mercer's theorem, it suffices to equip S with one's choice of measure and verify that, in fact, $K : S \times S \to \mathbb{R}$ satisfies Mercer's condition.

Mercer's theorem is stated in a general mathematical setting with implications in the theory of integral equations. However, the general statement is more than what is required for understanding the kernel trick. Given a finite observation set S, one can select the counting measure $\mu(T) = |T|$ for all $T \subset S$. Then the integral in Mercer's theorem reduces to a simple summation $\sum_{i=1}^n\sum_{j=1}^n K(x_i, x_j) c_i c_j \geq 0$ for all finite sequences of points x1, ..., xn of S and all choices of real numbers c1, ..., cn (cf. positive definite kernel).

Some algorithms that depend on arbitrary relationships in the native space would, in fact, have a linear interpretation in a different setting: the range space of $\varphi$. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to compute $\varphi$ directly during computation, as is the case with support vector machines. Some cite this running-time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.

The kernel trick was first published in 1964 by Aizerman et al.[1]

Theoretically, a kernel matrix K must be positive semi-definite (PSD).[2] Empirically, for machine learning heuristics, choices of K that do not satisfy Mercer's condition may still perform reasonably if K at least approximates the intuitive idea of similarity.[3] Regardless of whether K is a Mercer kernel, K can still be referred to as a "kernel". Suppose K is any square matrix; then $K^\mathrm{T}K$ is a PSD matrix.

## Applications

The kernel trick has been applied to several kinds of algorithms in machine learning and statistics, including support vector machines (SVMs), Gaussian processes, principal components analysis (kernel PCA), ridge regression, and spectral clustering.

Commonly used kernels in such algorithms include the RBF and polynomial kernels; the polynomial kernel represents a mapping of vectors in $\mathcal{R}^n$ into a much richer feature space over degree-$d$ polynomials of the original variables:[4]

$K(x,y) = (x^T y + c)^d$

where $c \geq 0$ is a constant trading off the influence of higher-order versus lower-order terms in the polynomial.
For $d=2$, this $K$ is the inner product in a feature space induced by the mapping $\varphi(x) = \langle x_n^2, \ldots, x_1^2, \sqrt{2} x_n x_{n-1}, \ldots, \sqrt{2} x_n x_1, \sqrt{2} x_{n-1} x_{n-2}, \ldots, \sqrt{2} x_{n-1} x_{1}, \ldots, \sqrt{2} x_{2} x_{1}, \sqrt{2c} x_n, \ldots, \sqrt{2c} x_1, c \rangle$ The kernel trick here lies in working in an $\left(\binom{n+2}{2} = \frac{n^2+3n+2}{2} \right)$-dimensional space, without ever explicitly transforming the original data points into that space, but instead relying on algorithms that only need to compute inner products $\varphi(x)^T\varphi(y)$ within that space, which are identical to $K(x,y)$ and can thus cheaply be computed in the original space using only $n+1$ multiplications. ## References 1. ^ M. Aizerman, E. Braverman, and L. Rozonoer (1964). "Theoretical foundations of the potential function method in pattern recognition learning". Automation and Remote Control 25: 821–837. 2. ^ Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012) Foundations of Machine Learning, The MIT Press ISBN 9780262018258. 3. ^ http://www.svms.org/mercer/ 4. ^ http://www.cs.tufts.edu/~roni/Teaching/CLT2008S/LN/lecture18.pdf
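As a numerical sanity check on the identity described above (my own illustration, not part of the article), the following Python snippet verifies for $n = 3$ and $d = 2$ that the kernel value $(x^Ty + c)^2$, computed in the original space, equals the inner product $\varphi(x)^T\varphi(y)$ in the 10-dimensional feature space:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
c = 1.0

def phi(v, c):
    """Explicit degree-2 polynomial feature map for n = 3 (10 dimensions)."""
    v1, v2, v3 = v
    s2, s2c = np.sqrt(2.0), np.sqrt(2.0 * c)
    return np.array([v1**2, v2**2, v3**2,           # squared terms
                     s2*v1*v2, s2*v1*v3, s2*v2*v3,  # cross terms
                     s2c*v1, s2c*v2, s2c*v3,        # linear terms
                     c])                            # constant term

kernel_value = (x @ y + c) ** 2       # evaluated directly in the 3-dim space
feature_dot = phi(x, c) @ phi(y, c)   # evaluated in the 10-dim feature space

print(np.isclose(kernel_value, feature_dot))  # True
```

The $\sqrt{2}$ weights on the cross terms are exactly what makes the expanded inner product reproduce every monomial of $(x^Ty + c)^2$ with the right coefficient.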
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926372230052948, "perplexity": 491.1163093472686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999651529/warc/CC-MAIN-20140305060731-00010-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.electro-tech-online.com/threads/how-to-measure-impedance-at-high-frequencies.96582/
How to measure Impedance at high frequencies

Status Not open for further replies.

Well-Known Member

Agilent is a great source for tutorials on how to measure things. Measuring the impedance of a simple passive component at high frequencies is not a trivial thing, so some background is needed to understand how to do it and where you can go wrong. This reference is an online app note rather than a book, but I thought this link needed a good home, so here it is: http://www.electro-tech-online.com/custompdfs/2009/08/5950-3000.pdf

The app note is not limited to RF frequencies, but also discusses audio-frequency measurements relating to transformers, batteries, and some other active and passive components.

Last edited:

Interesting. Thank you.

JimB

Thank you very much

rezaify
New Member

Thank you very much.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8205532431602478, "perplexity": 1393.7800413790221}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038878326.67/warc/CC-MAIN-20210419045820-20210419075820-00601.warc.gz"}
http://tex.stackexchange.com/questions/7192/defining-a-new-latex-environment-for-numbered-two-column-proofs
# Defining a new LaTeX environment for numbered two-column proofs

I'm trying to write up problem sets for my programming languages class, and I want to create an environment for a certain style of proof. Proofs in this style will have two columns, with numbering on the left (starting at 1), like this:

\begin{tabular}{rc|l}
(1) & Prop A & Justification \\
(2) & Prop B & Justification \\
(3) & Prop C & From (1) and (2)
\end{tabular}

I would like the numbering to happen automatically, without my having to add a column (like eqnarray does, with numbering on the right), and in a perfect world, I would like to be able to use references and labels to refer to conclusions, so that if I insert steps or move them around I won't have to renumber everything. Usage would hopefully look something like this:

\begin{twoproof}
Prop A & Justification \label{A} \\
Prop B & Justification \label{B} \\
Prop C & From \ref{A} and \ref{B}
\end{twoproof}

\newcounter{proofc}
\newenvironment{twoproof}{
\tabular{@{\stepcounter{proof} \arabic{proofc}}c|l}
}{
\endtabular
}

But not only will this not let me use \ref (I think), but whenever I try to use this environment I get this error:

! Missing \endcsname inserted. \csname\endcsname l.79 \begin{twoproof}

I tried running \show\eqnarray to see if I could use that as a template, but didn't understand the output at all, and I'm not sure where to look for enlightenment. I will appreciate any help or direction that anyone might be able to provide!

Edit 2: I figured this issue out. I tried to get clever and wrap my twoproof environments in \begin{displaymath} and \end{displaymath}, and I guess this environment doesn't play well with amsmath, because when I changed the way I was defining twoproof and got rid of the displaymath, everything started working again.

Edit: While Willie Wong's answer (thankfully) solved my problem with the missing \endcsname, when I try to run this example:

\begin{twoproof}
\label{step1} Prop A & Justification \\
\label{step2} Prop B & Justification \\
Prop C & Because \ref{step1} and \ref{step2}
\end{twoproof}

I run into this error:

Package amsmath Error: Multiple \label's: label 'step1' will be lost. See the amsmath package documentation for explanation. Type H <return> for immediate help. ... l.56 \label {step2} Prop B & Justification \\

If anyone has any ideas about what might be causing this, any help will be greatly appreciated. Also, as long as I'm editing the question, does anyone know how to right-justify the numbering in the table? It's not a big deal, but I think it might look a little nicer. I've thought very briefly about setting up a custom \halign, but I don't think it's worth the effort. If there's a (comparatively) easy solution, though, I'd love to hear it. Thanks again!

- I'm poking at your problem (unsuccessfully, so far), but allow me to offer a tip in the meantime: in general, use \refstepcounter{foo} to step the counter to which \label{bar}s refer. – Antal S-Z Dec 16 '10 at 4:00

Something fragile strikes again! Try the following:

\newcounter{proofc}
\renewcommand\theproofc{(\arabic{proofc})}
\DeclareRobustCommand\stepproofc{\refstepcounter{proofc}\theproofc}
\newenvironment{twoproof}{\tabular{@{\stepproofc}c|l}}{\endtabular}

There's a small bug in this though: you have to put \label as the first thing in your row for it to work. It won't work if you put it at the end.
So \begin{twoproof} \label{step1} Prop A & Justification \\ \label{step2} Prop B & Justification \\ Prop C & Because \ref{step1} and \ref{step2} \end{twoproof} - Why does the label have to come at the beginning? What's tabular doing that hides the \refstepcounter? –  Antal S-Z Dec 16 '10 at 4:02 Ahh this is perfect! Thank you for your help. This is also the first I've heard of fragile commands, so I'll have to see what I can find out about those. –  maths Dec 16 '10 at 4:32 @Antal: I don't know. My best guess is that it has something to do with the fact that each delimited entry in a tabular environment is put in a group. I hope someone more familiar with the TeX book can clarify. –  Willie Wong Dec 16 '10 at 11:55 There are some existing packages that may fit your needs, such as one of these packages for Fitch-style deductions or this one for Kalish-Montague style. I use those regularly. - You could also adapt listliketab. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8371155858039856, "perplexity": 1294.2653400976844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535924131.19/warc/CC-MAIN-20140901014524-00382-ip-10-180-136-8.ec2.internal.warc.gz"}
https://docs.bmc.com/docs/PATROL4Windows/51/uninstalling-patrol-for-windows-servers-758871400.html
# Uninstalling PATROL for Windows Servers

To uninstall PATROL for Windows Servers, you can use the Windows Add/Remove Programs functionality or the installation utility that you used to install the product.

Warning

If you use a different version of the installation program to uninstall the product than the version that you used to install the product, you might remove files that are needed to perform uninstallation of other BMC Software products.

The following procedures describe how to uninstall products from a Windows environment, either retaining or removing the related log files.

## To uninstall individual products

1. From the Uninstall directory in your BMC Software product installation directory, double-click uninstall.exe to launch the installation utility in uninstall mode.

   Note: As an option, you can launch the installation utility in uninstall mode by choosing Start > Settings > Control Panel > Add/Remove Programs and double-clicking BMC Software Tools in the Add/Remove Programs Properties dialog box. When installing on a Windows Server in application mode or with Citrix Metaframe installed, perform the following steps to launch the installation utility in uninstall mode:

   1. From a command line, change to the directory where the installation utility is located and enter the following command to change to installation mode: change user /install
   2. Change to the Uninstall directory and enter the following command to start the installation Web server: uninstall.exe -serveronly
      A message box is displayed that shows the URL to use to connect to the installation Web server.
   3. On another computer with a browser, start the browser.
   4. Connect to the installation Web server from the browser to start the installation utility by using the URL that is displayed in the message box. The Welcome window is displayed. Click Next.

2. Select the installation directory from which you want to remove a product, and click Next.
3. Select the product or products that you want to uninstall, and click Next.
4. Review your selections and click Uninstall. After the uninstallation is complete, a window is displayed that tells you whether the uninstallation was successful.

## To retain log files and configuration files

This task describes how to uninstall the PATROL product but retain log files, which contain history for future analysis, and configuration files for redeployment.

1. Uninstall all products as described in To uninstall individual products.
2. Locate the uninstall.ctl file in the following directory: %PATROL_HOME%\Uninstall\Install\instdata
3. Open the uninstall.ctl file in a text editor, and edit the /BMC/Base variable to specify the name of the directory from which you removed the products in step 1.
4. Open a command line prompt.
5. Change to the following directory: %PATROL_HOME%\Uninstall\Install\instbin
6. Enter the following command: thorinst.exe -uninstall path to control file -log path to log file -output path to output log file

Use the following table to help determine the log file and output log file locations:

| Option | Description | Value |
| --- | --- | --- |
| -log | Sends the log information to a standard log file. This file contains all installation status information. | Any valid path and file name (with a .txt extension). If a space exists in the path, the entire path must be enclosed in quotation marks. |
| -output | Sends the log information to an output log file. This file contains all messages about the progress of the installation that are normally sent to standard output. | Any valid path and file name (with a .txt extension). If a space exists in the path, the entire path must be enclosed in quotation marks. |

Example: If C:\Program Files\BMC Software is your product installation directory, you would change to the C:\Program Files\BMC Software\Uninstall\Install\instbin directory and enter the following command:

``thorinst.exe -uninstall "C:\Program Files\BMC Software\Uninstall\Install\instdata\uninstall.ctl" -log Z:\NetworkLogs\MyLogs.txt -output Z:\NetworkLogs\MyLogs.out``

This action would remove all the installation files and directories except those that are used by the utility at the time the uninstallation was performed. Log files, configuration files, and user-modified files would also be retained.

## To uninstall all log files and configuration files

This task describes how to remove all PATROL products and related log files and configuration files from your Windows computer. After these files have been removed, you cannot recover them unless you have made a back-up copy of the installation.

1. Uninstall all products as described in To uninstall individual products.
2. Locate the uninstall-all.ctl file in the following directory: %PATROL_HOME%\Uninstall\Install\instdata
3. Open the uninstall-all.ctl file in a text editor, and edit the /BMC/Base variable to specify the name of the directory from which you removed the products in step 1.
4. Open a command line prompt.
5. Change to the following directory: %PATROL_HOME%\Uninstall\Install\instbin
6. Enter the following command: thorinst.exe -uninstall path to control file -log path to log file -output path to output log file

Use the following table to help determine the log file and output log file locations:

| Option | Description | Value |
| --- | --- | --- |
| -log | Sends the log information to a standard log file. This file contains all installation status information. | Any valid path and file name (with a .txt extension). If a space exists in the path, the entire path must be enclosed in quotation marks. |
| -output | Sends the log information to an output log file. This file contains all messages about the progress of the installation that are normally sent to standard output. | Any valid path and file name (with a .txt extension). If a space exists in the path, the entire path must be enclosed in quotation marks. |

Example: If C:\Program Files\BMC Software is your product installation directory, you would change to the C:\Program Files\BMC Software\Uninstall\Install\instbin directory and enter the following command:

``thorinst.exe -uninstall "C:\Program Files\BMC Software\Uninstall\Install\instdata\uninstall-all.ctl" -log Z:\NetworkLogs\MyLogs.txt -output Z:\NetworkLogs\MyLogs.out``

This action would remove all installation files and directories. The files that were used to perform the uninstallation will be marked for deletion and will be removed when the computer on which the products were uninstalled is rebooted.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8407478332519531, "perplexity": 4284.006318032046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330233.1/warc/CC-MAIN-20190825130849-20190825152849-00008.warc.gz"}
https://community.bt.com/t5/BT-Infinity-Speed-Connection/Wholesale-Speed-Test/td-p/924314
Aspiring Expert
Posts: 586
Registered: 27-12-2010

## Wholesale Speed Test

Occasionally I get this when clicking on the "Run Diagnostic Test" button. What does it mean?

type Exception report

message

description The server encountered an internal error () that prevented it from fulfilling this request.

exception

```
java.lang.IllegalStateException: Cannot forward after response has been committed
 org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1078)
 org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:396)
 org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:232)
 org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)
 org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462)
 javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
 javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
```

note The full stack trace of the root cause is available in the Apache Tomcat/6.0.16 logs.

Distinguished Guru
Posts: 7,708
Registered: 01-01-2013

## Re: Wholesale Speed Test

It means what it says in the description. Their server-side Java process has thrown an error.

Aspiring Expert
Posts: 586
Registered: 27-12-2010

## Re: Wholesale Speed Test

ray_dorset wrote: It means what it says in the description. Their server-side Java process has thrown an error.

Well ray_dorset, I can read. Why? Is it all to do with BT's server, or has it anything to do with the speed test's interaction with the HH3?

All the best

Distinguished Guru
Posts: 7,708
Registered: 01-01-2013

## Re: Wholesale Speed Test

It is all to do with BT's server.

Expert
Posts: 462
Registered: 09-03-2010

## Re: Wholesale Speed Test

I think that they have fixed it this morning. I just tried doing one and it worked OK. This issue with their servers has been going on randomly for 5 years to my knowledge. I do wish they would fix it.

Infinidim
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9205359220504761, "perplexity": 1375.4279179882885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826907.66/warc/CC-MAIN-20160723071026-00305-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.r-bloggers.com/2019/08/how-to-do-mediation-scientifically/
Mediation analysis has been around a long time, though its popularity has varied between disciplines and over the years. While some fields have been attracted to the potential of mediation models to identify pathways, or mechanisms, through which an independent variable affects an outcome, others have been skeptical that the analysis of mediated relationships can ever be done scientifically. Two developments, one more scientific than the other, have led to a renewed popularity of mediation analysis.

The first, less scientific reason is that software developments have made it easier than ever to fit mediation models. Drag-and-drop interfaces such as AMOS, or the menu-driven PROCESS add-on to SPSS, have meant that researchers can easily test different models, often atheoretically in a $$p$$-hacking exercise, and get results that seem prima facie believable. However, it is entirely possible, and very easy, to utilize this software without any understanding of the assumptions mediation analysis requires to yield a causal interpretation. It turns out that many models that have passed peer review and ended up in scientific journals have no causal interpretation whatsoever. Simply drawing boxes and arrows, then hitting "Run" in the software of choice, is not appropriate methodology.

In the second, more scientific development, statisticians from computer science, epidemiology, psychology, and political science have separately, but in a complementary manner, brought mediation into the causal inference framework. This has not only illuminated the assumptions needed to call a mediation model "causal," but it has also created tools that allow researchers to assess the sensitivity of their causal claims to unobserved confounders. The newer potential outcomes framework provides concrete definitions for mediated effects, which had been seriously lacking in the prior mediation literature. Does it make sense to study the effect of an intervention while holding the mediator constant? If so, how do we interpret holding the mediator constant when the mediator is affected by other confounders outside the researcher's control? Or should we hold the treatment constant and determine the effect on outcomes when we vary the mediator, which we might prefer if intervening on the mediator is cheaper than the treatment? The real-world consequences of mediation have until recently been glossed over, but the potential outcomes framework allows us to consider different scenarios.

This blog post will review the history of mediation prior to its incorporation into the potential outcomes framework, when the focus was on determining the appropriate standard error for indirect effects. It then describes how the meaning of an indirect effect is more nuanced than most applications have given it credit for. The subsequent section describes the potential outcomes notation and the assumptions underlying the interpretation of a mediation pathway as causal. The final section provides three examples of when the old-school approach to mediation works and, more importantly, when it fails.

Throughout, the discussion will, unless indicated otherwise, assume a binary intervention, a single continuous mediator, and a continuous outcome that is modeled as a linear function of the intervention, mediator, and (possibly) confounders. This is the simplest and most familiar situation.
However, it should become evident that even in this seemingly straightforward scenario there are enough complications to show the reader how much care is required in interpreting mediation models. Moving into the world of categorical outcomes, and especially multiple mediators that affect each other, only makes things more complex.

## A Brief History of Mediation

Take the following simple and familiar mediation model. $$X$$ is a binary intervention that affects $$Y$$ both directly and indirectly by changing the value of the mediator $$M$$, which in turn affects $$Y$$. This DAG corresponds to the following linear models:

\begin{align} \mathbb{E}[M] &= \gamma_0 + \gamma_1X \\ \mathbb{E}[Y] &= \beta_0 + \beta_1M + \beta_2X \end{align}

Substituting for $$M$$ in the model of $$Y$$ yields:

\begin{align} \mathbb{E}[Y] &= \beta_0 + \beta_1(\gamma_0 + \gamma_1X) + \beta_2X \\ & = (\beta_0 + \beta_1\gamma_0) + \beta_1\gamma_1X + \beta_2X \end{align}

The substitution makes it clear that the direct effect of $$X$$ on $$Y$$ can be recovered as the term $$\beta_2$$ from a regression of $$Y$$ on $$X$$ and $$M$$. The indirect effect can be recovered as the product of $$\beta_1$$ from regressing $$Y$$ on both $$X$$ and $$M$$, times $$\gamma_1$$ from the regression of $$M$$ on $$X$$. Whereas the significance of the direct effect can be found in the usual regression output, the significance of the indirect effect needs an extra step. Three ways of testing the significance have been proposed.

First, an oft-cited 1986 article from Baron & Kenny suggested a three-step approach to assessing mediation:

1. Regress $$Y$$ on $$X$$ only. This first step establishes whether there is any relationship at all that could be mediated.
2. Regress $$M$$ on $$X$$ only. This establishes that there is a relationship between $$X$$ and the mediator. If $$M$$ is unresponsive to $$X$$, it cannot mediate.
3. Regress $$Y$$ on $$X$$ and $$M$$ simultaneously. If the previously significant relationship between $$Y$$ and $$X$$ disappears, the relationship is completely mediated. If the previously significant relationship is still significant, but the magnitude is diminished, the association is partially mediated.

This approach still does not provide a standard error or confidence interval for the product term $$\beta_1\gamma_1$$. Sobel (1982) therefore proposed applying the delta method to the system of equations defined above by the separate regressions for $$Y$$ and $$M$$. Recall that the delta method calculates a variance based on a linear approximation of a statistic $$\theta$$. For a $$\theta$$ that is a function $$f$$ of two random variables $$a$$ and $$b$$,

$Var_{\theta} = \nabla f(a,b)^{\prime}\, cov(a,b)\, \nabla f(a,b)$

where $$cov(a,b)$$ denotes the 2 x 2 covariance matrix of $$(a,b)$$. For the indirect effect we have

$\theta = f(\beta_1, \gamma_1) = \beta_1\gamma_1$

The gradient vector is:

$\begin{bmatrix} \gamma_1 \\ \beta_1 \end{bmatrix}$

So our first-order variance approximation is:

\begin{align} Var_{\gamma_1\beta_1} &= \begin{bmatrix} \gamma_1 & \beta_1 \end{bmatrix} \begin{bmatrix} Var_{\beta_1} & Cov_{\beta_1, \gamma_1} \\ Cov_{\beta_1, \gamma_1} & Var_{\gamma_1} \end{bmatrix} \begin{bmatrix} \gamma_1 \\ \beta_1 \end{bmatrix} \\ & = \gamma_1^2 Var_{\beta_1} + \beta_1^2 Var_{\gamma_1} + 2\beta_1\gamma_1 Cov_{\beta_1, \gamma_1} \end{align}

When the models for $$M$$ and $$Y$$ are assumed to be independent, as is typically done, the covariance term is zero.
The standard error is consequently estimated by taking the square root of the variance:

$SE_{\beta_1\gamma_1} = \sqrt{\gamma_1^2 SE_{\beta_1}^2 + \beta_1^2 SE_{\gamma_1}^2}$

where $$SE_{\beta_1}$$ is the standard error for $$\beta_1$$ taken from the regression of $$Y$$ on $$X$$ and $$M$$, and $$SE_{\gamma_1}$$ is the standard error of $$\gamma_1$$ from the regression of $$M$$ on $$X$$. In large samples, the product divided by its Sobel-method standard error is distributed normally and can therefore be used to test for significance.

The Sobel method is implemented in most software packages that perform mediation. However, it is based on an approximation that requires $$N$$ to be large enough for the sampling distribution to be normal. Because researchers do not always have the luxury of large samples, the third approach to assessing indirect effects is to rely on the nonparametric bootstrap. This method boils down to treating the sample as though it were a population, repeatedly sampling from the sample with replacement, calculating the product term on each bootstrap sample, and using the standard deviation of the resulting distribution as the standard error.
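Since the post describes but does not show the computation, here is a minimal Python sketch using statsmodels (the data are simulated; the coefficients, sample size, and seed are my own assumptions) that produces the product-of-coefficients estimate, its Sobel standard error, and a bootstrap standard error:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
x = rng.binomial(1, 0.5, n)
m = 0.5 * x + rng.normal(size=n)             # gamma_1 = 0.5
yv = 0.7 * m + 0.3 * x + rng.normal(size=n)  # beta_1 = 0.7, beta_2 = 0.3

def indirect(x, m, yv):
    """Product-of-coefficients estimate and its Sobel standard error."""
    g = sm.OLS(m, sm.add_constant(x)).fit()                         # M ~ X
    b = sm.OLS(yv, sm.add_constant(np.column_stack([m, x]))).fit()  # Y ~ M + X
    gamma1, se_g = g.params[1], g.bse[1]
    beta1, se_b = b.params[1], b.bse[1]
    sobel_se = np.sqrt(gamma1**2 * se_b**2 + beta1**2 * se_g**2)
    return gamma1 * beta1, sobel_se

est, sobel_se = indirect(x, m, yv)

# Nonparametric bootstrap: resample rows, recompute the product each time
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect(x[idx], m[idx], yv[idx])[0]

print(f"indirect = {est:.3f}, Sobel SE = {sobel_se:.3f}, "
      f"bootstrap SE = {boot.std(ddof=1):.3f}")
```

In a sample this large the two standard errors will usually agree closely; in small samples the bootstrap distribution of the product is noticeably skewed, which is one reason the bootstrap is generally preferred.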
## The Troubles

All of this is fine practice assuming the simple mediation model presented in the path diagram above holds. However, once confounders are introduced the situation becomes a bit more murky. And, for observational data, there are always confounders. A simple example taken from Pearl (2014) illustrates two problems created by confounders: direct effect estimates may be biased, and the meaning of direct/indirect effects is ambiguous. Take the following model:

Here $$X$$ has no direct or indirect effect on $$Y$$; the only connection between them is the path $$X \rightarrow M \leftarrow C \rightarrow Y$$, where $$C$$ affects both $$M$$ and $$Y$$. Assume $$X$$ and $$M$$ are both binary and the model equations have the following linear specifications, with coefficients of either 0 or 1:

\begin{align} Y &= 0X + 0M + 1C \\ M &= 1X + 1C \end{align}

The second equation written in terms of $$C$$ is,

$C = M - X$

The direct effect estimate of $$X \rightarrow Y$$ we would like to recover should equal zero. Yet if we control for $$M$$, as per the Baron and Kenny approach, we end up instead with an estimate of $$-1$$. Why? Controlling for $$M$$ means that we change $$X$$ from zero to one while holding $$M$$ constant at, say, zero. In this scenario, with $$X = 0$$ and $$M = 0$$, we get:

\begin{align} Y &= C \\ &= M - X \\ &= 0 - 0 \\ &= 0 \end{align}

When we switch to $$X = 1$$, still holding $$M$$ constant at zero, we get:

\begin{align} Y &= C \\ &= M - X \\ &= 0 - 1 \\ &= -1 \end{align}

Thus, changing $$X$$ from 0 to 1 while holding $$M$$ constant gets an effect of $$0 - 1 = -1$$. $$M$$ is a function of both $$X$$ and $$C$$, so holding $$M$$ constant at zero would require $$C$$ to also change in order to maintain the equality $$M = X + C$$. To keep the structural model's integrity, we would need to introduce a counterfactual world in which $$M$$ is forced to take on the value of zero regardless of $$C$$. This is equivalent to removing the $$C \rightarrow M$$ path, which is not the same as statistically controlling for $$M$$. Defining direct and indirect effects therefore needs to be based carefully on what we think we can intervene on given the model's structural relationships.

Are we interested in the effect of changing $$X$$ from, say, $$X = 1$$ to $$X = 0$$? If so, we need to understand what "holding $$M$$ constant" means given the presence of confounders. Or are we interested in the effect of changing $$M$$, say from $$M = 1$$ to $$M = 0$$? Are we interested in both? Switching to a counterfactual framework will force us to be more explicit about what we are manipulating and how we report direct and indirect effects. It also forces us to be explicit in the assumptions necessary to identify (estimate from data) these effects. The next section turns to defining the different types of direct effects we can estimate along with more concrete definitions.

## Potential Outcomes and Mediation

The counterfactual approach to treatment effects is now well-established for non-mediation models. The idea is that every subject has multiple potential outcomes: one that occurs if the treatment is received $$(X = 1)$$ and one that occurs if the treatment is not received $$(X = 0)$$. We refer to a single subject's two potential outcomes as $$Y_0$$ for the outcome when $$X = 0$$ and $$Y_1$$ for the outcome when $$X = 1$$. If we observed both of these, the treatment effect for this subject could be calculated as $$Y_1 - Y_0$$. However, we only observe a treated subject receiving the treatment; the non-treatment is the unobserved counterfactual. Alternatively, we only observe the non-treated subject not receiving the treatment; the treatment outcome is the unobserved counterfactual. While we cannot identify the individual-specific treatment effect, it is easy to show that we can estimate the Average Treatment Effect (ATE) across all individuals in our sample.

Mediation requires an expansion of the potential outcomes. We first have two counterfactuals for the mediator $$M$$ given treatment $$X$$:

1. The value of $$M$$ when $$X = 0$$, $$M_0$$.
2. The value of $$M$$ when $$X = 1$$, $$M_1$$.

We have four counterfactuals for the outcome $$Y$$.

1. The value of $$Y$$ when $$X = 0$$ and $$M = M_0$$, $$Y_{0M_0}$$.
2. The value of $$Y$$ when $$X = 1$$ and $$M = M_1$$, $$Y_{1M_1}$$.
3. The value of $$Y$$ when $$X = 0$$ and $$M = M_1$$, $$Y_{0M_1}$$.
4. The value of $$Y$$ when $$X = 1$$ and $$M = M_0$$, $$Y_{1M_0}$$.

Of these four, only one is observed for a given individual, and the third and fourth will never be observed for any individual. The latter two may seem odd to consider at first glance. However, given these potential outcomes, we can state the following definitions (Pearl, 2001):

1. The controlled direct effect (CDE): The change in $$Y$$ when switching from $$X = 0$$ to $$X = 1$$ if $$M$$ were set at the same value for everybody.
2. The natural direct effect (NDE): The change in $$Y$$ when switching from $$X = 0$$ to $$X = 1$$ if $$M$$ were set at $$M_0$$ for everybody. That is, the NDE is the direct effect if the mediator were forced to take on the value it would have, for each individual, in the absence of treatment ($$X = 0$$).
3. The natural indirect effect (NIE): The change in $$Y$$ when holding $$X = 0$$ for everybody and changing the mediator from $$M_0$$, its value in the absence of treatment, to $$M_1$$, its value under treatment.

Using counterfactual notation:

1. CDE: $$Y_{1M_{m}} - Y_{0M_{m}}$$
2. NDE: $$Y_{1M_0} - Y_{0M_0}$$
3. NIE: $$Y_{0M_1} - Y_{0M_0}$$

Note that the $$M_m$$ subscript for the CDE means that the value of the CDE will change depending on the value we set for $$M$$.
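Because the definitions are stated in terms of counterfactuals, a simulation is a useful way to see them in action: unlike real data, simulated data lets us generate every potential outcome for every unit. The following Python sketch (the structural equations and coefficients are illustrative assumptions of mine) computes the average CDE, NDE, and NIE directly from the counterfactuals:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Assumed structural model (linear, no treatment-by-mediator interaction):
#   M_x  = 0.4 + 0.5 * x + e_m
#   Y_xm = 0.3 * x + 0.7 * m + e_y
e_m = rng.normal(size=n)
e_y = rng.normal(size=n)

M0 = 0.4 + 0.5 * 0 + e_m  # mediator under no treatment
M1 = 0.4 + 0.5 * 1 + e_m  # mediator under treatment

def Y(x, m):
    return 0.3 * x + 0.7 * m + e_y

m_fixed = 1.0                                 # the level at which the CDE fixes M
CDE = (Y(1, m_fixed) - Y(0, m_fixed)).mean()  # ~0.30
NDE = (Y(1, M0) - Y(0, M0)).mean()            # ~0.30
NIE = (Y(0, M1) - Y(0, M0)).mean()            # ~0.7 * 0.5 = 0.35
TE  = (Y(1, M1) - Y(0, M0)).mean()            # ~0.65

print(f"CDE={CDE:.3f}  NDE={NDE:.3f}  NIE={NIE:.3f}  TE={TE:.3f}")
```

Because this toy model is linear with no $$X \times M$$ interaction, the CDE is the same at every level of $$m$$ and equals the NDE, and the total effect decomposes exactly as NDE + NIE.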
The NIE and NDE are defined so that the total effect $$TE = NIE + NDE$$. In the special case of a linear model with no treatment-by-mediator interaction and no post-treatment confounders, the CDE equals the NDE. When this special case does not obtain, the CDE will differ from the NDE and cannot be added to the NIE to get the total effect. Nonetheless, the benefit of the CDE, as the next section discusses, is that it requires fewer assumptions to estimate. Thus, the direct causal effect of an intervention under specific values of the mediator can be recovered even in cases when the causal indirect effect is not identified.

Because they are based on unobserved counterfactuals, none of the three types of effects is identified at the individual level. As with ATEs in traditional causal inference, we instead focus on the average CDE, NDE, and NIE, as these effects can be identified from the data we do observe given certain assumptions are met. The next section turns to describing these assumptions and how they lead to identification of direct and indirect effects.

## Assumptions

As a point of reference, consider the following mediation model with confounders:

Here $$X$$ is the treatment, $$Y$$ is the outcome, and $$M$$ is the mediator. $$C_1$$, $$C_2$$, and $$C_3$$ are vectors of confounders, which may or may not contain the same variables. From VanderWeele (2015), we will make the following assumptions:

- A1: There is no unmeasured confounding of the treatment-outcome relationship.
- A2: There is no unmeasured confounding of the mediator-outcome relationship.
- A3: There is no unmeasured confounding of the treatment-mediator relationship.
- A4: There is no mediator-outcome confounder that is affected by the exposure.

Based on the figure, these assumptions correspond to the following:

- A1: $$C_1$$ contains all confounders of the $$X \rightarrow Y$$ path, is observed, and is included in the model of $$Y$$.
- A2: $$C_2$$ contains all confounders of the $$M \rightarrow Y$$ path, is observed, and is included in the model of $$Y$$.
- A3: $$C_3$$ contains all confounders of the $$X \rightarrow M$$ path, is observed, and is included in the model of $$M$$.
- A4: None of the $$M \rightarrow Y$$ confounders in $$C_2$$ is itself affected by the treatment. This would be violated, for example, if the path $$X \rightarrow C_2 \rightarrow M$$ were to exist.

The (average) natural indirect effect is defined as:

$NIE = \mathbb{E}[Y_{1M_1} - Y_{1M_0}\mid C] = \mathbb{E}[Y_{1M_1} \mid C] - \mathbb{E}[Y_{1M_0} \mid C]$

We thus have two terms, $$\mathbb{E}[Y_{1M_1} \mid c]$$ and $$\mathbb{E}[Y_{1M_0} \mid c]$$, that we hope to be able to write in terms of our observed data. Pearl (2001) provides proofs, based on standard expectation algebra, of how these assumptions lead to the identification of each term from observable data.

\begin{align} (A)NIE &= \mathbb{E}[Y_{1M_1} \mid c] - \mathbb{E}[Y_{1M_0} \mid c] \\ &= \sum_m \mathbb{E}[Y \mid x = 1, m, c] \{P(m \mid x = 1, c) - P(m \mid x = 0, c)\} \end{align}

This estimand is identified because all of the terms on the RHS of the equation can be recovered from the data. Pearl's notation may be unfamiliar to researchers who have traditionally worked in the regression-based Baron and Kenny world. VanderWeele and Vansteelandt (2009) show how regression can be used to recover the corresponding terms.
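For a binary mediator, the sum in the mediation formula has only two terms and can be computed directly from sample means. Here is a minimal plug-in sketch (my own illustration; treatment is randomized, so the no-confounding assumptions hold by construction and no covariates $$c$$ are needed):

```r
# Plug-in version of the mediation formula for a binary mediator.
set.seed(7)
n <- 1e5
X <- rbinom(n, 1, 0.5)                     # randomized treatment
M <- rbinom(n, 1, plogis(-0.5 + 1.2 * X))  # binary mediator
Y <- rnorm(n, 1 + 0.5 * X + 0.9 * M)       # outcome

EY1m <- tapply(Y[X == 1], M[X == 1], mean)  # E[Y | x = 1, m] for m = 0, 1
pm1  <- prop.table(table(M[X == 1]))        # P(m | x = 1)
pm0  <- prop.table(table(M[X == 0]))        # P(m | x = 0)
sum(EY1m * (pm1 - pm0))                     # NIE estimate
0.9 * (plogis(0.7) - plogis(-0.5))          # truth implied by the simulation, ~0.26
```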
Assuming that all four assumptions are met and the correct functional form is linear, we can specify two equations:

\begin{align} \mathbb{E}(Y \mid X = x, M = m, C = c) &= \theta_0 + \theta_1x + \theta_2m + \theta_3xm + \theta_4^{\prime}c \\ \mathbb{E}(M \mid X = x, C = c) &= \beta_0 + \beta_1x + \beta_2^{\prime}c \end{align}

where $$c$$ is a vector of confounders and $$xm$$ is a possible interaction between the treatment and mediator. Given a change in treatment from $$x^*$$ to $$x$$, the treatment effects from a linear structural model are:

\begin{align} CDE(m) &= (\theta_1 + \theta_3m)(x - x^*) \\ NDE &= (\theta_1 + \theta_3\beta_0 + \theta_3\beta_1x^* + \theta_3\beta_2^{\prime}c)(x - x^*) \\ NIE &= (\theta_2\beta_1 + \theta_3\beta_1x)(x - x^*) \end{align}

As the examples below will show, these formulas reduce to the Baron and Kenny approach if there is no treatment-by-mediator interaction $$(\theta_3 = 0)$$ and all assumptions are met. Note, though, that VanderWeele (2015) shows that it is generally not a good idea to assume no interaction is present. This is because:

1. The interaction term allows for a fuller picture of the causal mechanism.
2. Statistical tests that show non-significance of the interaction should not be trusted due to low power, which is often the case for interactions.
3. Even if the interaction term is not significant in a regression, its inclusion can sometimes boost the power to identify other significant effects.

The assumptions A1-A4 deal with the ability to measure and control for confounders. While this is no different from any regression model, the problem is made more complicated by the fact that confounders can affect multiple paths. Omitted variable bias is consequently a bigger threat to the validity of mediation estimates than it is in traditional single-equation regression models. Interestingly, the multi-equation set-up of mediation modeling provides a unique opportunity to both test the robustness of results to unmeasured confounders and quantify just how large the effect of those confounders would have to be to explain away the causal effect. The calculations are based on the observation that, in the absence of confounding, the errors from the mediator model and the outcome model should be uncorrelated (Imai, Keele, and Yamamoto, 2010). Tingley et al. (2014) have developed the R package mediation to perform the sensitivity analysis and present useful graphical summaries of the results. VanderWeele (2015) discusses an alternative sensitivity test that can also be applied to single-equation models. Describing these in greater detail is beyond the scope of this already lengthy post, but the reader should consult the publications, as sensitivity analyses are becoming standard practice in mediation.

Sensitivity analyses are great for testing the no-confounding assumptions A1-A3, but assumption A4 (no confounder of $$M \rightarrow Y$$ is affected by $$X$$) may be very strong. Indeed, it rules out using regression for models like the following:

It is well known that conditioning on a post-treatment covariate can induce spurious correlations and should be avoided (Acharya, Blackwell, and Sen, 2016). Yet not controlling for $$C$$ means that the estimate for $$M_1 \rightarrow Y$$ will be biased. The upshot is that traditional regression modeling cannot be used to partition total effects into separate indirect and direct effects that have a causal meaning.
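Before moving on, here is a hedged sketch of what the sensitivity analysis mentioned above might look like with the mediation package of Tingley et al. (2014). The data frame `dat` and its variable names are placeholders of mine, not objects from this post:

```r
library(mediation)

# Mediator and outcome models (placeholder data frame `dat` with
# treatment X, mediator M, outcome Y, and measured confounders C1, C2).
med_m <- lm(M ~ X + C1 + C2, data = dat)
med_y <- lm(Y ~ X * M + C1 + C2, data = dat)

med_out <- mediate(med_m, med_y, treat = "X", mediator = "M", sims = 1000)
summary(med_out)   # ACME (indirect), ADE (direct), and total effect

# Sensitivity of the ACME to correlation (rho) between the two error terms:
sens_out <- medsens(med_out, rho.by = 0.1)
summary(sens_out)
plot(sens_out)     # how large rho must be to drive the ACME to zero
```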
A variation on this is the case when there are multiple, non-independent mediators, as in the following:

We label $$M_2$$ now as a second mediator rather than a confounder because it is of theoretical interest. If there were no $$M_2 \rightarrow M_1$$ path, we could still proceed using regression modeling and calculating separate NIEs. However, the presence of the $$M_2 \rightarrow M_1$$ path complicates things, and standard regression is either not an option or requires additional assumptions. This is discussed in several publications (see, for example, Imai and Yamamoto, 2013, and citations therein).

On the other hand, it turns out that the CDE only requires assumptions A1 and A2. In the notation of Pearl's SCMs, we avoid the issues of a post-treatment confounder of $$M \rightarrow Y$$ by simply disabling any arrows pointing at $$M$$ and setting its value to a constant, as though we were intervening on the mediator ourselves. Though we only intervene in theory, the result of doing so is an estimate of the direct effect conditional on the value of $$M$$.

$CDE = P(Y=y \mid do(X = x), do(M = m)) - P(Y=y \mid do(X = x^{\prime}), do(M = m))$

This result may seem unsatisfying for somebody interested in a statistical test of the indirect effect. However, as Acharya et al. (2016) discuss, the CDE can sometimes illuminate causal mechanisms on its own. First, a nonzero estimate of the CDE rules out complete mediation by the mediator. That is, there is some causal path other than the one that passes through the proposed mediator. Second, if we assume, as is often done, that there is no interaction between the treatment and the mediator, the CDE and the NDE are equal. This means that one can recover the NIE by subtracting the CDE from the total effect. Recall, however, that VanderWeele (2015) argues in favor of always including such an interaction effect. In addition, although identified, regression cannot always be used to estimate the CDE. If assumption A4 is in fact violated, the effect is still identified, but a different approach to estimation is required (Vansteelandt, 2009), or much stronger assumptions are necessary (Imai & Yamamoto, 2013).

## Example 1

We'll demonstrate the problems under three different scenarios. First, we'll look at a case where the assumptions can be satisfied with appropriate controls and where there's no interaction between the treatment and mediator. Here there are two confounders. $$C_1$$ confounds the association between $$X$$ and $$Y$$, and $$C_2$$ confounds the association between $$M$$ and $$Y$$. Without proper control, these violate assumptions A1 and A2. First, we'll generate some data consistent with the model:

\begin{align} \mathbb{E}(Y \mid X = x, M = m, C_1 = c_1, C_2 = c_2) &= \theta_0 + \theta_1x + \theta_2m + \theta_3c_1 + \theta_4c_2 \\ \mathbb{E}(M \mid X = x, C_2 = c_2) &= \beta_0 + \beta_1x + \beta_2c_2 \end{align}

```r
set.seed(12345)
C1 <- rnorm(10000)                     # Generate first random confounder
X  <- rbinom(10000, 1, plogis(.8*C1))  # Generate random tx variable as a function of C1
C2 <- rnorm(10000)                     # Generate second random confounder
M  <- rnorm(10000, .8*X + .8*C2)       # Generate mediator as function of tx and second confounder
Y  <- rnorm(10000, .8*X + .8*M + .8*C1 + .8*C2)  # Model outcome

tbl <- tibble(
  C1 = C1,
  C2 = C2,
  X = X,
  M = M,
  Y = Y)
```

The data generating process assumed an NDE of 0.80 ($$\theta_1 = 0.8$$) and an NIE of 0.64 ($$\theta_2\beta_1 = 0.8 \times 0.8$$).
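As an aside, this simulated data also gives us a chance to sketch the nonparametric bootstrap for the indirect effect described near the top of the post. This is my own minimal version, not code from the original analysis:

```r
# Bootstrap the indirect effect: resample rows, refit both models,
# and collect the product of the two path coefficients.
set.seed(2020)
boot_prod <- replicate(500, {
  d <- tbl[sample(nrow(tbl), replace = TRUE), ]
  a <- coef(lm(M ~ X + C2, data = d))["X"]           # X -> M path
  b <- coef(lm(Y ~ X + M + C1 + C2, data = d))["M"]  # M -> Y path
  a * b
})
c(estimate = mean(boot_prod), se = sd(boot_prod))
quantile(boot_prod, c(0.025, 0.975))  # percentile confidence interval
```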
Let's fit these models:

```r
mod_y <- lm(Y ~ X + M + C1 + C2, data = tbl) %>%
  broom::tidy() %>%
  mutate_if(is.numeric, ~round(.x, 3)) %>%
  mutate(p.value = if_else(
    p.value < 0.001,
    true = "< 0.001",
    false = as.character(round(p.value, 3))))

mod_m <- lm(M ~ X + C2, data = tbl) %>%
  broom::tidy() %>%
  mutate_if(is.numeric, ~round(.x, 3)) %>%
  mutate(p.value = if_else(
    p.value < 0.001,
    true = "< 0.001",
    false = as.character(round(p.value, 3))))

kable(mod_y, align = c("l", rep("c", 4)))
```

| term        | estimate | std.error | statistic | p.value |
|-------------|:--------:|:---------:|:---------:|:-------:|
| (Intercept) | 0.015    | 0.015     | 1.043     | 0.297   |
| X           | 0.794    | 0.023     | 34.549    | < 0.001 |
| M           | 0.810    | 0.010     | 78.972    | < 0.001 |
| C1          | 0.808    | 0.011     | 74.832    | < 0.001 |
| C2          | 0.795    | 0.013     | 60.938    | < 0.001 |

```r
kable(mod_m, align = c("l", rep("c", 4)))
```

| term        | estimate | std.error | statistic | p.value |
|-------------|:--------:|:---------:|:---------:|:-------:|
| (Intercept) | 0.017    | 0.014     | 1.199     | 0.231   |
| X           | 0.767    | 0.020     | 38.988    | < 0.001 |
| C2          | 0.806    | 0.010     | 81.819    | < 0.001 |

Since there is no interaction, $$\theta_3 = 0$$, the NDE formula is:

\begin{align} NDE &= (\theta_1)(x - x^*) \\ &= (0.794)(1 - 0) \\ &= 0.794 \end{align}

The estimate of the NIE is:

\begin{align} NIE &= (\theta_2\beta_1)(x - x^*) \\ &= (0.810 \times 0.767)(1 - 0) \\ &= 0.62127 \end{align}

What do we get from the Baron and Kenny method? We can use the lavaan package to return the direct and indirect effects the usual way. The following are three incompletely specified models and one correct specification.

```r
library(lavaan)

# Model without confounders
mod_1 <- "Y ~ a*X + b*M
          M ~ c*X
          ind_fx := b*c"

# Model with confounder 1
mod_2 <- "Y ~ a*X + b*M + C1
          M ~ c*X
          ind_fx := b*c"

# Model with confounder 2
mod_3 <- "Y ~ a*X + b*M + C2
          M ~ c*X + C2
          ind_fx := b*c"

# Model with both confounders
mod_4 <- "Y ~ a*X + b*M + C1 + C2
          M ~ c*X + C2
          ind_fx := b*c"
```

Here's a function to pull out just the direct and indirect effect estimates.

```r
get_fx <- function(df, model) {
  mod <- sem(model, data = df) %>%
    parameterEstimates() %>%
    filter(label %in% c("a", "ind_fx")) %>%
    as_tibble()

  tibble(Effect = c("NDE", "NIE"),
         Estimate = pull(mod, est),
         p.value = pull(mod, pvalue)) %>%
    mutate(p.value = if_else(
      p.value < 0.001,
      true = "< 0.001",
      false = as.character(round(p.value, 3)))) %>%
    mutate(Estimate = round(Estimate, 3))
}
```

Map over the models:

```r
list(`No Confounders` = mod_1,
     `Confounder 1` = mod_2,
     `Confounder 2` = mod_3,
     `Both Confounders` = mod_4) %>%
  map_dfr(~get_fx(tbl, .x), .id = "Model Type") %>%
  arrange(Effect) %>%
  kable()
```

| Model Type       | Effect | Estimate | p.value |
|------------------|--------|:--------:|:-------:|
| No Confounders   | NDE    | 1.086    | < 0.001 |
| Confounder 1     | NDE    | 0.508    | < 0.001 |
| Confounder 2     | NDE    | 1.373    | < 0.001 |
| Both Confounders | NDE    | 0.794    | < 0.001 |
| No Confounders   | NIE    | 1.310    | < 0.001 |
| Confounder 1     | NIE    | 0.612    | < 0.001 |
| Confounder 2     | NIE    | 1.115    | < 0.001 |
| Both Confounders | NIE    | 0.643    | < 0.001 |

Clearly, appropriate control needs to be made for confounders. But you knew that, so let's make this a little more complicated.

## Example 2

We'll take the same model as before, but now we'll violate assumption A4 by introducing an association between the treatment and the mediator-outcome confounder:

The only change is that there is now one additional path, $$X \rightarrow C_2$$.
We'll generate the data:

```r
C1 <- rnorm(10000)                     # Generate first random confounder
X  <- rbinom(10000, 1, plogis(.8*C1))  # Generate random tx variable as a function of C1
C2 <- rnorm(10000, .8*X)               # Generate second confounder as function of tx
M  <- rnorm(10000, .8*X + .8*C2)       # Generate mediator as function of tx and second confounder
Y  <- rnorm(10000, .8*X + .8*M + .8*C1 + .8*C2)  # Model outcome

tbl <- tibble(
  C1 = C1,
  C2 = C2,
  X = X,
  M = M,
  Y = Y)
```

The data generating process is again consistent with an NIE of 0.64. The NDE is now 1.44, as it includes both the direct path $$X \rightarrow Y$$ (.8) and the path $$X \rightarrow C_2 \rightarrow Y$$ (.8 × .8 = .64).

Fit the models the traditional way using lavaan.

```r
# Model without confounders
mod_1 <- "Y ~ a*X + b*M
          M ~ c*X
          ind_fx := b*c"

# Model with confounder 1
mod_2 <- "Y ~ a*X + b*M + C1
          M ~ c*X
          ind_fx := b*c"

# Model with confounder 2
mod_3 <- "Y ~ a*X + b*M + C2
          M ~ c*X + C2
          ind_fx := b*c"

# Model with both confounders
mod_4 <- "Y ~ a*X + b*M + C1 + C2
          M ~ c*X + C2
          ind_fx := b*c"
```

Map over the models:

```r
list(`No Confounders` = mod_1,
     `Confounder 1` = mod_2,
     `Confounder 2` = mod_3,
     `Both Confounders` = mod_4) %>%
  map_dfr(~get_fx(tbl, .x), .id = "Model Type") %>%
  arrange(Effect) %>%
  kable()
```

| Model Type       | Effect | Estimate | p.value |
|------------------|--------|:--------:|:-------:|
| No Confounders   | NDE    | 1.479    | < 0.001 |
| Confounder 1     | NDE    | 0.919    | < 0.001 |
| Confounder 2     | NDE    | 1.392    | < 0.001 |
| Both Confounders | NDE    | 0.823    | < 0.001 |
| No Confounders   | NIE    | 1.758    | < 0.001 |
| Confounder 1     | NIE    | 1.096    | < 0.001 |
| Confounder 2     | NIE    | 1.127    | < 0.001 |
| Both Confounders | NIE    | 0.664    | < 0.001 |

Recall that the NDE is the change in $$Y$$ when we change from $$X = x^*$$ to $$X = x$$, holding $$M$$ at $$M_0$$ for everybody. When we set $$M = M_0$$ for everybody, what is left is the effect of $$X$$ not operating through the mediator. As seen in the table, we get a direct effect estimate of $$B_{x \rightarrow y} = .823$$ when we control for both confounders. Although this is consistent with how we generated the direct $$X \rightarrow Y$$ path, it is not the NDE. By the definition of the NDE (which we defined so that NDE + NIE = TE), the estimate should also include the path $$X \rightarrow C_2 \rightarrow Y$$. In other words, the estimate is not consistent with our definition and cannot be added to the NIE to get the total effect.

So, you think, we can just estimate a different model without $$C_2$$ to get the NDE. But, as the table shows, the estimate of the NIE is then wrong. This is a classic case of omitted variable bias, because the effect through the confounder is being added to the NIE estimate. (For what it's worth, the NDE is also wrong.)

There is one final approach that may save us and still allow us to get both the NIE and NDE. The NIE was correct in the model that controlled for both $$C_1$$ and $$C_2$$; we just didn't get the correct NDE. We could, in theory, specify the SEM to also estimate the regression of $$C_2$$ on $$X$$ and calculate the indirect effect of $$X$$ on $$Y$$ through $$C_2$$. We then just add the $$X \rightarrow Y$$ and $$X \rightarrow C_2 \rightarrow Y$$ paths together to get the NDE.

```r
# Model with both confounders
mod_5 <- "Y ~ a*X + b*M + C1 + c*C2
          M ~ e*X + C2
          C2 ~ d*X
          ind_fx_1 := b*e
          ind_fx_2 := c*d
          nde := a + c*d"

parameterEstimates(sem(mod_5, tbl)) %>%
  filter(label == "nde") %>%
  dplyr::select(label, estimate = est, se, pvalue) %>%
  mutate_if(is.numeric, ~round(.x, 3)) %>%
  mutate(pvalue = if_else(pvalue <= 0, "< .001", as.character(pvalue))) %>%
  kable()
```

| label | estimate | se    | pvalue |
|-------|:--------:|:-----:|:------:|
| nde   | 1.467    | 0.031 | < .001 |

Now we have the correct answer. But notice a few things:
1. The NDE did not come out of the regression results, as is usually assumed, but had to be calculated separately on the basis of the SEM that included the $$X \rightarrow C_2$$ path.
2. With observational data, $$C_2$$ is almost certainly not a single variable but rather a vector of many variables. The manual process quickly grows tedious.
3. If $$C_2$$ contains variables that cannot be observed, the NIE will be biased.
4. If $$C_2$$ is actually a second mediator, the potential outcomes become much more laborious to derive. The upshot is that the assumptions also become more onerous, and the model may not be identified for other parametric forms.

## Example 3

For the final example, we'll return to the model from example 1. That is, we'll remove the $$X \rightarrow C_2$$ path so we are not in the unfortunate situation of having a post-treatment confounder to contend with. However, we will introduce an interaction between the treatment and the mediator. The following code chunk gets us this model:

```r
C1 <- rnorm(10000)                     # Generate first random confounder
X  <- rbinom(10000, 1, plogis(.8*C1))  # Generate random tx variable as a function of C1
C2 <- rnorm(10000)                     # Generate second random confounder
M  <- rnorm(10000, .8*X + .8*C2)       # Generate mediator as function of tx and second confounder
Y  <- rnorm(10000, .8*X + .8*M + .8*X*M + .8*C1 + .8*C2)  # Model outcome with x by m interaction

tbl <- tibble(
  C1 = C1,
  C2 = C2,
  X = X,
  M = M,
  Y = Y) %>%
  mutate(MX = M*X)  # For lavaan, need to create the interaction variable explicitly
```

The NDE and NIE are no longer simple products of coefficients, because the effect of $$M$$ on $$Y$$ depends on the level of $$X$$, and the effect of $$X$$ on $$Y$$ depends on the level of $$M$$. We can fit our models via regression and use the VanderWeele (2015) formulas to recover the estimates consistent with our definitions.
```r
mod_y <- lm(Y ~ X*M + C1 + C2, data = tbl) %>%
  broom::tidy() %>%
  mutate_if(is.numeric, ~round(.x, 3)) %>%
  mutate(p.value = if_else(
    p.value < 0.001,
    true = "< 0.001",
    false = as.character(round(p.value, 3))))

mod_m <- lm(M ~ X + C2, data = tbl) %>%
  broom::tidy() %>%
  mutate_if(is.numeric, ~round(.x, 3)) %>%
  mutate(p.value = if_else(
    p.value < 0.001,
    true = "< 0.001",
    false = as.character(round(p.value, 3))))

kable(mod_y, align = c("l", rep("c", 4)))
```

| term        | estimate | std.error | statistic | p.value |
|-------------|:--------:|:---------:|:---------:|:-------:|
| (Intercept) | -0.001   | 0.015     | -0.073    | 0.941   |
| X           | 0.818    | 0.027     | 30.308    | < 0.001 |
| M           | 0.800    | 0.013     | 60.976    | < 0.001 |
| C1          | 0.798    | 0.011     | 74.831    | < 0.001 |
| C2          | 0.794    | 0.013     | 61.205    | < 0.001 |
| X:M         | 0.789    | 0.016     | 49.258    | < 0.001 |

```r
kable(mod_m, align = c("l", rep("c", 4)))
```

| term        | estimate | std.error | statistic | p.value |
|-------------|:--------:|:---------:|:---------:|:-------:|
| (Intercept) | 0.005    | 0.014     | 0.374     | 0.708   |
| X           | 0.811    | 0.021     | 38.042    | < 0.001 |
| C2          | 0.789    | 0.010     | 79.132    | < 0.001 |

If we use the correct formulas, evaluated at the covariate mean $$c_2 = 0$$, we get the following estimate for the NDE:

\begin{align} NDE &= (\theta_1 + \theta_3\beta_0 + \theta_3\beta_1x^* + \theta_3\beta_2c_2)(x - x^*) \\ &= (0.818 + 0.789 \times 0.005 + 0.789 \times 0.811 \times 0 + 0.789 \times 0.789 \times 0)(1 - 0) \\ &= 0.821945 \end{align}

The estimate of the NIE is:

\begin{align} NIE &= (\theta_2\beta_1 + \theta_3\beta_1x)(x - x^*) \\ &= (0.800 \times 0.811 + 0.789 \times 0.811 \times 1)(1 - 0) \\ &= 1.288679 \end{align}

But if we use the Baron and Kenny method, we get the following:

```r
# Model without confounders
mod_1 <- "Y ~ a*X + b*M
          M ~ c*X
          ind_fx := b*c"

# Model with confounder 1
mod_2 <- "Y ~ a*X + b*M + C1
          M ~ c*X
          ind_fx := b*c"

# Model with confounder 2
mod_3 <- "Y ~ a*X + b*M + C2
          M ~ c*X + C2
          ind_fx := b*c"

# Model with both confounders
mod_4 <- "Y ~ a*X + b*M + C1 + C2
          M ~ c*X + C2
          ind_fx := b*c"

# Model with both confounders & interaction
mod_5 <- "Y ~ a*X + b*M + MX + C1 + C2
          M ~ c*X + C2
          ind_fx := b*c"

list(`No Confounders` = mod_1,
     `Confounder 1` = mod_2,
     `Confounder 2` = mod_3,
     `Both Confounders` = mod_4,
     `Both Confounders + Interaction` = mod_5) %>%
  map_dfr(~get_fx(tbl, .x), .id = "Model Type") %>%
  arrange(Effect) %>%
  kable()
```

| Model Type                     | Effect | Estimate | p.value |
|--------------------------------|--------|:--------:|:-------:|
| No Confounders                 | NDE    | 2.033    | < 0.001 |
| Confounder 1                   | NDE    | 1.444    | < 0.001 |
| Confounder 2                   | NDE    | 1.950    | < 0.001 |
| Both Confounders               | NDE    | 1.371    | < 0.001 |
| Both Confounders + Interaction | NDE    | 0.818    | < 0.001 |
| No Confounders                 | NIE    | 3.223    | < 0.001 |
| Confounder 1                   | NIE    | 2.304    | < 0.001 |
| Confounder 2                   | NIE    | 2.311    | < 0.001 |
| Both Confounders               | NIE    | 1.653    | < 0.001 |
| Both Confounders + Interaction | NIE    | 0.654    | < 0.001 |

Things worked out so that our NDE estimate is close to the regression result in the model that includes the interaction. However, the NIE is way off when compared to the correct formula presented above. To accurately account for mediators that interact with the treatment, we cannot simply multiply two coefficients together.

## Conclusion

This post summarizes the history of mediation analysis, including modern approaches based on the potential outcomes framework. Although the examples were based on simple, linear models with a binary treatment, it should be apparent that traditional applications of mediation modeling can fail when the assumptions underlying the model are not considered. Researchers commonly draw all kinds of boxes and arrows, hit Run in their software, and report the results. The Baron and Kenny and related approaches require very simple models that fully account for all confounders, do not contain interactions, and do not have multiple mediators affecting each other.
When the careless modeler starts drawing arrows from any exogenous variable to multiple intermediate variables, the results may easily turn out to be meaningless.

## Citations

Acharya, A., Blackwell, M., & Sen, M. (2016). Explaining causal findings without bias: Detecting and assessing direct effects. American Political Science Review, 110(3), 512-529.

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173-1182.

Imai, K., Keele, L., & Yamamoto, T. (2010). Identification, inference and sensitivity analysis for causal mediation effects. Statistical Science, 25(1), 51-71.

Imai, K., & Yamamoto, T. (2013). Identification and sensitivity analysis for multiple causal mechanisms: Revisiting evidence from framing experiments. Political Analysis, 21(2), 141-171.

Pearl, J. (2001). Direct and indirect effects. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (pp. 411-420). Morgan Kaufmann.

Pearl, J. (2014). Reply to commentary by Imai, Keele, Tingley, and Yamamoto, concerning causal mediation analysis. Psychological Methods, 19(4), 488-492.

Sobel, M. E. (1982). Asymptotic confidence intervals for indirect effects in structural equation models. Sociological Methodology, 13, 290-312.

Tingley, D., Yamamoto, T., Hirose, K., Keele, L., & Imai, K. (2014). mediation: R package for causal mediation analysis. Journal of Statistical Software, 59(5), 1-38.

VanderWeele, T. J. (2015). Explanation in causal inference: Methods for mediation and interaction. New York: Oxford University Press.

VanderWeele, T. J., & Vansteelandt, S. (2009). Conceptual issues concerning mediation, interventions and composition. Statistics and Its Interface, 2(4), 457-468.

Vansteelandt, S. (2009). Estimating direct effects in cohort and case-control studies. Epidemiology, 20(6), 851-860.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 28, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9794433116912842, "perplexity": 1763.3526769531638}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039503725.80/warc/CC-MAIN-20210421004512-20210421034512-00053.warc.gz"}
https://www.physicsforums.com/threads/3-simple-questions.10468/
# 3 simple questions

1. Dec 7, 2003

### suzy

1. You are in a rocket ship travelling away from the Earth. Your mother, who is on Earth, sends you a message using light with a frequency nu. Should you adjust your receiver to be sensitive to a frequency greater than, the same as, or less than nu?

Solution: I'm travelling away from Earth, so I should adjust my receiver to a frequency greater than or the same as nu.

2. The force of gravity acting on an object on Earth is almost the same as the force of gravity acting on a space shuttle in orbit. However, the astronauts in an orbiting space shuttle float around the cabin weightlessly. Why?

Solution: The gravity that is acting on the astronauts is not strong enough, so they float weightlessly in the cabin.

3. A photon has an energy of E = 5x10^(-19) J. An electron has a momentum p = 9.5x10^(-25) kg.m/s. Do these particles have the same wavelength and energy? For an electron, the de Broglie relation is wavelength = h/p and its nonrelativistic energy is E = p^2/(2m). Note: Planck's constant h = 6.626x10^(-34) J.s, mass of an electron m = 9.1x10^(-31) kg, speed of light c = 3x10^8 m/s.

Solution: I have no clue....

Thanks for helping me... ~.~

2. Dec 7, 2003

### Norman

First question: You are correct (Doppler effect).

Second question: The question states that the force of gravity is almost the same in the shuttle as it is on Earth, so your answer is incorrect. You need to think about what weightlessness is, and you should be thinking about the idea of freefall.

Third question: You are given all that you need to solve this problem, at least the energy part. You are given the momentum of the electron and the equation that relates the momentum of the electron to its energy. For the wavelength part: you can find the de Broglie wavelength of the electron, so all you need to do is figure out how a photon's energy is related to its wavelength (or frequency, since you know that c = f*lambda, where lambda is the wavelength and f is the frequency).

good luck,
Ryan

3. Dec 8, 2003

### HallsofIvy

Staff Emeritus

Norman said you were correct and I see his reasoning but, in my opinion, you haven't answered the question. You were given three options and your answer is two of them!

4. Dec 8, 2003

### Staff: Mentor

Regarding the rocket ship question: Norman didn't supply any reasoning. But you are right, suzy did give two answers, but neither is correct. (The receiver is moving away from the source.)

5. Dec 8, 2003

6. Dec 8, 2003

### NateTG

Regarding Question 2: Actually, people can also float inside airplanes that are much closer to the ground. Perhaps there is a better explanation?

7. Dec 8, 2003

### JonnyW

'Actually, people can also float inside airplanes that are much closer to the ground. Perhaps there is a better explanation?'

That would be when the plane is accelerating towards Earth faster than 9.81 ms^-2, and this can only be done for short periods of time. The effect is that you are still accelerating towards Earth, but as the object you are in is as well, you feel weightless.

8. Dec 8, 2003

### suzy

Thank you for helping me out. ^.^

1. Solution: Due to the Doppler effect, and since the receiver is moving away from the source, the frequency must be greater than or the same as the source.

2. Solution: I'm not sure about this one. I understand that if the shuttle falls to the Earth faster than 9.8 m/s^2, the astronauts will be weightless. But the question is, when the shuttle is orbiting in space, how can the astronauts float weightlessly?

3. Solution: I still don't understand this at all.....help help....
Thanks, Suzy ^.^

9. Dec 9, 2003

### Staff: Mentor

The apparent weightlessness of astronauts in the shuttle is due to the fact that they (and the shuttle) are in free fall. This can happen in outer space or it can happen right outside your window. The astronauts still have weight (gravity didn't go away!) but they don't feel that weight because nothing is pushing against them. Stand on the roof of a building; you can feel it push against you, holding you up. Step off the building, and you will be "weightless" for a brief period (ignoring the air rushing by). The same is true in an airplane that turns off its engines and starts to plummet. The reason that "weightlessness" is associated with things in orbit is that orbiting bodies are continuously free falling, so there is plenty of opportunity to experience the apparent weightlessness.

10. Dec 9, 2003

### JonnyW

Question the third:

Energy of the photon is 5x10^(-19) J.

Energy of the electron is its momentum squared over twice its mass, therefore E = (9.5x10^(-25))^2 / (2 x 9.1x10^(-31)) = 4.96x10^(-19) J, which is as near as damn it to the energy of the photon.

Energy of a photon: E = hf, where c = fw and w is the wavelength, therefore E = hc/w and w = hc/E, so the wavelength of the photon is 397.6 nm.

The wavelength and momentum of a particle are related by wp = h, so w = h/p = 0.698 nm.

Conclusion - energies are the same, wavelengths aren't.

Correct me if I'm wrong please
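The arithmetic in the last post is easy to check with a few lines of R (a quick sketch of my own; the constants are taken from the problem statement):

```r
h <- 6.626e-34          # Planck's constant, J*s
c_light <- 3e8          # speed of light, m/s
m <- 9.1e-31            # electron mass, kg
E_photon   <- 5e-19     # photon energy, J
p_electron <- 9.5e-25   # electron momentum, kg*m/s

p_electron^2 / (2 * m)  # electron energy: ~4.96e-19 J
h * c_light / E_photon  # photon wavelength: ~3.98e-7 m, i.e. ~398 nm
h / p_electron          # electron de Broglie wavelength: ~6.98e-10 m, i.e. ~0.698 nm
```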
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8595753312110901, "perplexity": 833.8007329684032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721555.54/warc/CC-MAIN-20161020183841-00540-ip-10-171-6-4.ec2.internal.warc.gz"}
https://thermtest.com/papers_author/wei-gu
# Thermal Conductivity Paper Database

## Recommended Papers for: Wei Gu

Total Papers Found: 2

#### Influence of environmental factors on the adsorption capacity and thermal conductivity of silica nano-porous materials

Aerogels have a wide range of properties that make them effective thermal insulators. However, these properties are lost when the materials absorb too much water. The goal of this experiment was to determine how temperature and humidity affect the insulative properties of silica nano-porous aerogels. ...

#### A numerical study on the influence of insulating layer of the sensor on the thermal conductivity measuring accuracy

In this study, the thermal properties of stainless steel, ceramic, and silica aerogels were numerically calculated and then compared to actual test results. A thermal constants analyser measured the thermal conductivities and thermal diffusivities of the samples using the transient plane source (TPS) method. Results ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8371725678443909, "perplexity": 2419.7221562680515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00538.warc.gz"}
http://math.stackexchange.com/users/20150/student
# student

member for 2 years, 3 months · seen Jan 30 at 3:41 · 216 profile views

# 29 Questions

- 25 When is the product of $n$ subgroups a subgroup?
- 14 The space of Riemannian metrics on a given manifold.
- 9 A subgroup such that every left coset is contained in a right coset.
- 8 Diffeomorphism group of the unit circle
- 7 Existence of a Riemannian metric inducing a given distance.

# 1,469 Reputation

- +20 The space of Riemannian metrics on a given manifold.
- +10 What does a subspace spanned by another subspace and a vector mean?
- +20 Example of a flat manifold with non-zero (global) holonomy group.
- +10 Properties preserved by diffeomorphisms but not by homeomorphisms

Top answers:

- 17 What's the name for the property of a function $f$ that means $f(f(x))=x$?
- 12 Is $\int_a^b f(x) dx = \int_{f(a)}^{f(b)} f^{-1}(x) dy$?
- 8 Can the Schwarz lemma be extended to functions $f : D \to \overline{D}$?
- 7 Example of a flat manifold with non-zero (global) holonomy group.
- 6 What does a subspace spanned by another subspace and a vector mean?

# 54 Tags

- calculus × 5 (score 18)
- complex-analysis × 9 (score 16)
- abstract-algebra × 6 (score 17)
- differential-geometry × 15 (score 15)
- complex-numbers × 2 (score 17)
- integration × 3 (score 12)
- matrices (score 17)
- linear-algebra × 4 (score 10)
- definition (score 17)
- intuition × 2 (score 7)

# 6 Accounts

- Mathematics: 1,469 rep
- MathOverflow: 180 rep
- TeX - LaTeX: 153 rep
- English Language & Usage: 108 rep
- Super User: 103 rep
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682288527488708, "perplexity": 1203.0885452349414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011159105/warc/CC-MAIN-20140305091919-00025-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.yumpu.com/en/document/view/19389462/isbn-978-83-933105-0-0-vixra
# ISBN 978-83-933105-0-0 - viXra

The Everlasting Theory and Special Number Theory

Sylwester Kornowski

Acknowledgments

I am enormously grateful to Paul Walewski for comments on part of the manuscript and meticulous care with the copy-editing.

Contents

Abstract
1 Experimental Data and Program of Ultimate Theory
2 Phase Transitions of Newtonian Spacetime, Neutrinos, Nucleons, Electrons, Pions and Muons
3 Interactions
4 Structure of Particles (continuation)
5 Liquid-like Plasma
6 New Cosmology
7 Four-shell Model of Atomic Nucleus
8 Mathematical Constants
9 Fractal Field
10 New Big Bang Theory
11 Reformulated Quantum Chromodynamics
12 Proton and Loops as Foundations of Theory of Chaos
13 Theoretical Curve for the Kaon-to-Pion Ratio
14 The Cross Section for Production of the W Boson
15 Neutrino Speed
16 M-theory
17 Perihelion Precession of Mercury and Venus
18 Foundations of Quantum Physics
19 Foundations of General Theory of Relativity
20 The Combination of Quantum Physics and General Theory of Relativity
21 General Relativity in Reformulated QCD and New Cosmology
22 Electroweak Interactions, Non-Abelian Gauge Theories and Origin of E = mc^2
Recapitulation and Ultimate Equation
Definitions

Abstract: The Everlasting Theory is the lacking part of the ultimate theory and is free from singularities and infinities. There are two long-distance interactions. This suggests that there are two parallel spacetimes. The density of the most fundamental spacetime, which I call the Newtonian spacetime or modified Higgs field, leads to the gravitational constant, whereas the density of the second spacetime, which I refer to as the Einstein spacetime, leads to the fine-structure constant. Nature on its lowest levels once again behaves classically. The components of the two parallel spacetimes are today the classical objects. This causes that the complete set of the initial conditions, which describe the properties of the two spacetimes, is not numerous and is very simple. The fundamental conditions lead to the initial conditions applied in the mainstream theories, i.e. the General Theory of Relativity and the Quantum Theory of Fields, but also to the masses of leptons and quarks applied in the Standard Model. Since I start from the fundamental initial conditions, I proved that the initial conditions applied in the mainstream theories are incomplete and that many incorrect interpretations appear. The main reason that the mainstream theories must be reformulated is the fact that they neglect the classical internal structure of the bare fermions. In reality, there is a torus with a ball in its centre. The surfaces of the tori inside the bare fermions look like the Ketterle surface for a strongly interacting gas. It is very difficult to describe such a structure mathematically in order to add it to a Lagrangian. We must apply new methods. In the last section, titled "Definitions", I describe at length the relations between the Everlasting Theory presented here and the mainstream theories. I describe the origin of the Higgs mechanism and the hierarchy problem, the Planck critical quantities, confinement and mass gaps, hadronization, and the limitations of Quantum Chromodynamics. The confinement presented here breaks the symmetry between gravity and the weak interactions. Contrary to the mainstream theories, the Everlasting Theory acts correctly across the whole spectrum of sizes.
To explain the inflation, long-distance entanglement, cohesion of the wave function and constancy of the speed of light, we need the fundamental spacetime composed of tachyons. The tachyons have inertial mass only, i.e. they are gravitationally massless particles. Moreover, their mean spin is approximately 10^67 times smaller than the reduced Planck constant. This means that to a good approximation we can assume that the fundamental spacetime is a gravitationally massless scalar field.

There are two basic phenomena. The saturation of interactions of the tachyons leads to the phase transitions of the fundamental/Newtonian spacetime. The first phase transition leads to the closed strings the neutrinos consist of, the second leads to the Einstein spacetime, the third to the core of baryons, whereas the fourth leads to the cosmic object, the Protoworld, after the era of inflation (there appears the new cosmology). In Einstein's spacetime the quantum effects and fractal objects appear. The second phenomenon, i.e. the symmetrical decays of the bosons at very high temperatures, leads to the Titius-Bode law for the strong interactions and to the Titius-Bode law for the gravitational interactions acting near the black holes. Due to the Titius-Bode law for the strong interactions, there appears the atom-like structure of baryons. The core of baryons is a black hole in respect of the strong interactions, whereas the ball in its centre is a black hole in respect of the weak interactions. Their masses are quantized, so they emit the surplus energy. The same concerns the gravitational black holes. On the basis of these two phenomena and the 7 parameters only, I calculated several hundred basic theoretical results consistent with or very close to the experimental data. I calculated the basic physical constants as well as the mass of the electron.

Due to the fact that nature on its lowest levels once again behaves classically, the lacking part of the ultimate theory, i.e. the Everlasting Theory, is mathematically very simple, even simpler than Newtonian mechanics. But the Uncertainty Principle and the relativistic formulae appear as well. The E. Kasner solution for the flat anisotropic model (1921) in the General Theory of Relativity leads to the numbers characteristic for the bare fermions, especially for the tori. On the other hand, the internal structure of the bare fermions leads to the known interactions and the quantum behaviour of the electron. The electron consists of the Einstein spacetime components and, due to the fundamental/Newtonian spacetime, it can disappear in one place and appear in another, and so on. Such behaviour leads to the wave function. We can see that quantum behaviour follows from the existence of the two parallel spacetimes.

The value of the gravitational constant depends on the internal structure of the neutrinos and the inertial mass density of the Newtonian spacetime. This means that Quantum Gravity is associated with the quantum behaviour of the neutrinos. Neutrinos consist of the binary systems of the closed strings, so neutrinos can be quantum particles only in a spacetime composed of the binary systems of the closed strings. Such a spacetime was in existence only in the era of inflation. In this era, this spacetime decayed into small regions, and today the binary systems of the closed strings are inside the neutrinos. The Quantum Gravity was valid in the era of inflation only.
Today the gravity is classical because, due to the lack of a spacetime composed of the closed strings, there cannot be created the neutrino-antineutrino pairs, similarly to the electron-positron pairs created from the Einstein spacetime components. The Kasner solution and the scales for the charges (weak, electric and strong) in the generalized Kasner solution and the BKL oscillatory model lead to the phase transitions of the fundamental spacetime and to the Protoworld-neutrino transition that caused the exit of the early Universe from the black-hole state. The phase transitions are the foundations of the modified/useful string/M theory. There is also the ultimate equation that combines the masses of the sources of all types of interactions. The Kasner solution leads to the new cosmology as well. We can say also that the Kasner solution is the foundation of the Quantum Theory of Gravity and the Quantum Theory of Fields without singularities and infinities.

The Everlasting Theory, based on the phase transitions of the fundamental/Newtonian spacetime, shows where the non-Abelian gauge theories become useless. Due to the phase transitions and entanglement, the new fields have torus-like shapes. They behave in a different way than the gauge fields, so we must apply new methods. The symmetry group SU(3)×SU(2)×U(1) is incomplete in the low-energy regime. There is a lack of the stable structures that appear due to the phase transitions of the Newtonian spacetime. The incompleteness causes that the Standard Model does not lead to the superluminal neutrinos which appeared in the supernova SN 1987A explosion and does not lead to the masses of nucleons. The Everlasting Theory shows that the liquid-like plasma obtained in the high-energy collisions of nucleons consists of the cores of baryons. Within the reformulated Quantum Chromodynamics, I described the electron-positron and nucleon-nucleon collisions. The new structure of the proton and loops is the foundation of the theory of chaos. The structure of the proton leads to the Feigenbaum scaling, whereas the loops lead to the Mandelbrot-like set. I wrote the generalized Schrödinger equation that contains gravity and showed how we can obtain the generalized Dirac equation. I described also the perihelion precession of Mercury and Venus and solved the 4/3-factor problem for the mass-energy relation of the classical electron. The origin of DNA follows from the reformulated QCD.

The ultimate theory must contain non-perturbative and perturbative theories. The ground state of the Einstein spacetime consists of the non-rotating-spin neutrino-antineutrino pairs. The total helicity of this state is zero and it consists of particles whose spin is unitary. In such a spacetime there cannot appear loops which have helicity, and so mass as well. In reality, a unitary-spin loop (the loop state) is the binary system of two entangled half-integral-spin loops with opposite helicities, i.e. the resultant helicity is zero. In such a spacetime turbulences do not appear. Such a loop can easily transform into a fermion-antifermion pair (the fermion state). Perturbation theories concern the loop states whereas the non-perturbative theories concern the fermion states, so we cannot neglect the structure of bare fermions.

Experimental Data and Program of Ultimate Theory

The direct and indirect evidences that superluminal particles are in existence are as follows. There are the superluminal neutrinos. Entangled photons show that they can communicate with speeds higher than the c.
The wave function describing our Universe can be a coherent mathematical object only if very distant points of the wave function can communicate with speeds much higher than the c. We can say that coherent quantum physics needs the tachyons. Also the Michelson-Morley experiment leads to the conclusion that masses emit the tachyons, because then the speed of light in relation to the field composed of the tachyons and 'attached' to a mass does not depend on the rotation of the mass and its other motions.

In the Einstein General Theory of Relativity we apply the formula for the total energy E of particles in the Einstein spacetime in which the mass M is an inertial mass equal to the gravitational mass. Assume that the word 'imaginary' concerns physical quantities characteristic for objects that have broken contact with the wave function that describes the state of the Universe. This means that such objects cannot emit any particles. Assume that the tachyons are internally structureless objects, i.e. they are pieces of space, so they cannot emit any objects. From this it follows that the tachyons have only the inertial mass m. Substitute ic instead of c, iv instead of v and im instead of M, where i = sqrt(-1). Then the formula for the total energy N of a gas composed of tachyons is:

N = -imc^2/sqrt(1 - v^2/c^2) = mc^2/sqrt(v^2/c^2 - 1).

We can see that the Theory of Relativity leads to the imaginary Newtonian spacetime composed of the tachyons, i.e. to the fundamental spacetime. The Theory of Relativity is a more fundamental theory than the Quantum Physics. The Quantum Physics appears on a higher level of nature and is associated with the excited states of the Einstein spacetime. There are in existence two spacetimes, i.e. the Einstein spacetime and the imaginary Newtonian spacetime. I will show that the phase transitions of the imaginary Newtonian spacetime lead to the Einstein spacetime. My theory shows that tachyons are moving with speeds about 8·10^88 times higher than the c. The total energy T of the two spacetimes we can define as the sum of the energy E that appears in the Theory of Relativity and the imaginary energy N associated with the Newtonian spacetime: T = E + iN. The m is in proportion to the volume of a tachyon, i.e. m = aV, so N = aVc^2/sqrt(v^2/c^2 - 1). We can see that when the speed of a tachyon increases, its energy decreases. It is possible only due to the higher grinding of tachyons when they move with higher speed. For infinite speed of a tachyon, its volume is equal to zero, i.e. in the 'gas' there is an infinite number of mathematical points moving with infinite speeds. But such a state of the gas composed of tachyons cannot be realized, because the total volume of the increasing number of tachyons still must be the same and positive.

The Everlasting Theory starts with three assumptions:

1. That there exists the Newtonian spacetime that is composed of structureless tachyons that have a positive mass;
2. That phase transitions of the Newtonian spacetime are possible; and
3. That among other stable objects arising due to the phase transitions of the Newtonian spacetime, the massive core of baryons arises. Due to the symmetrical decaying of virtual bosons, outside the massive core the use of the Titius-Bode law for the strong interactions is obligatory. This will lead to an atom-like structure of baryons.

The Newtonian spacetime maintains a classical approach, i.e. the behaviour of tachyons cannot be described by a wave function due to the lack of a more fundamental spacetime.
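To illustrate the behavior of the tachyon energy formula quoted above, the following minimal sketch (my own illustration, in arbitrary units, not part of the manuscript) evaluates N = mc^2/sqrt(v^2/c^2 - 1) for a range of superluminal speeds and shows the claimed fall of energy with speed:

```r
# Evaluate N = m*c^2 / sqrt(v^2/c^2 - 1) for v > c (arbitrary units).
c0 <- 1                       # speed of light in arbitrary units
m  <- 1                       # unit inertial mass
v  <- c(1.1, 2, 5, 10, 100)   # superluminal speeds, all > c0
N  <- m * c0^2 / sqrt(v^2 / c0^2 - 1)
round(N, 4)                   # energy decreases as speed increases
```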
Nature begins from classical objects, whereas quantum physics appears on the higher levels of nature. The diagram entitled 'Main ideas' shows the main structure of the Everlasting Theory. In general, the Einstein theories of relativity describe the motions of particles in a smooth gravitational field. By and large, quantum physics describes the interactions of particles with fields via quantum fields (i.e. via unsmooth fields where quantum particles appear). The quantum particles disappear in one place of a field or spacetime and appear in another, and so on. Unification of the smoothness and the 'roughness' of fields within one mathematical description is, however, still not realized. The diagram shows that to understand the differences between general relativity and quantum physics, we must be familiar with the internal structure of the Einstein spacetime and bare particles.

The Everlasting Theory is the lacking part of the ultimate theory. The Everlasting Theory is the theory of the internal structures and interactions of the stable objects and the two spacetimes. Even the quantum particles, for the periods of spinning, are the stable objects. The stable objects arise due to the phase transitions of the imaginary Newtonian spacetime. The key components of the Everlasting Theory are the properties of the imaginary Newtonian spacetime, its phase transitions leading to the stable objects, and the Titius-Bode law for the strong interactions that leads to the atom-like structure of baryons.

The ground state of the Einstein spacetime consists of non-rotating-spin binary systems of neutrinos. To detect the non-rotating-spin binary systems of neutrinos we must measure mass with an accuracy of about 10^-67 kg. No one has identified the products of neutrino-antineutrino annihilations. This suggests that in today's Universe the neutrinos are non-quantum particles, i.e. their state is not described by a wave function, due to a too low energy density of the Newtonian and Einstein spacetimes. In the Einstein spacetime, the virtual particle-antiparticle pairs can arise. Photons are the rotational energies of the entangled Einstein spacetime components. The c = 299,792,458 m/s is the natural speed of the entangled binary systems of neutrinos in the gradients of gravity 'attached' to the masses. Due to the Newtonian spacetime, the photons can also behave as quantum particles, i.e. their energy can disappear in one place of the Einstein spacetime and appear in another, and so on. The non-rotating-spin binary systems of binary systems of neutrinos (the neutrino bi-dipoles) with parallel spins carry the gravitational energy. Due to the internal structure of the rotating neutrino bi-dipoles, they behave as two entangled photons. Gravitons are not in existence. Gradients produced in the Newtonian spacetime by neutrinos are impressed on the Einstein spacetime as well. The gravitational constant depends on the internal structure of neutrinos and the properties of the Newtonian spacetime. The phase transitions of the Newtonian spacetime show that cosmology should begin from different initial conditions than the Cosmological Standard Model.

Conclusions from experimental data

1. Pions appear in the main channels of the decay of the Lambda and Sigma+ hyperons. During the decay of the hyperon Lambda, negatively charged and neutral pions appear. On the basis of this experimental data [1] we can assume that a neutron with a probability of x about 0.63 is composed of a positively charged core and a negative pion.
Furthermore, with the probability (1-x) it is composed of a neutral core and a neutral pion. During the decay of the hyperon Sigma+, neutral and positively charged pions appear. On the basis of this experimental data [1] we can assume that the proton with a probability y about 0.51 is composed of a positively charged core and a neutral pion, and with the probability (1-y) is composed of a neutral core and a positive pion.

2. We know that the nucleon-nuclear magnetic moment ratios are about +2.79 for a proton [1] and -1.91 for a neutron [1]. On the basis of these experimental results, we can assume that the mass of the charged core is about H(charged)~727 MeV and the relativistic charged pion is W(charged)~216 MeV. Such values of the probabilities and masses lead to the experimental data for magnetic moments.

3. During the extremely energetic collisions of ions, a liquid-like substance appears [2]. This also suggests that there is a massive core inside a nucleon.

4. The triplet n-p scattering length is approximately 5.4 fm. The singlet n-p effective range is approximately 2.7 fm, whereas the triplet n-p effective range is approximately 1.7 fm. Assume that outside of the core of nucleons the Titius-Bode law for strong interactions r(d) = A + dB, where A~0.7 fm, B~0.5 fm, and d = 0, 1, 2, 4, is obligatory. The diameter of the last 'orbit' is, therefore, 2r(d=4) = 2(A+4B) = 5.4 fm, the radius of the last orbit is r(d=4) = A+4B = 2.7 fm, whereas the radius of the last but one orbit is r(d=2) = A+2B = 1.7 fm.

5. Observed entangled particles separated spatially need superluminal particles.

6. We know that the gravitational constant has the same value for all masses. This and the Planck length suggest that all matter should be composed of inflexible particles having a size close to the Planck length: they are the neutrinos.

7. Very dense cosmic objects, for example the NGC 4261 galaxy (there is a 'point' mass in the centre of the ring/torus), and some stable particles having a high internal energy density should appear similar, because the macrocosm and microcosm are described by the same set of physical laws.

8. The creation of one additional baryon for approximately a billion baryon-antibaryon annihilations leads to the temperature of the Universe today being a few hundred billion times higher than the measured one. This suggests that baryon-antibaryon symmetry was broken before the 'soft' big bang, after the period of inflation.

9. We are unable to see the by-products of neutrino-antineutrino annihilation. This suggests that neutrinos are very stable particles, and this suggests that the oscillation of neutrinos is impossible as well. The observed 'oscillations' of neutrinos are due to the fact that the Einstein spacetime consists of the binary systems of neutrinos. In fact, we observe the exchanges of free neutrinos for the neutrinos in the binary systems of neutrinos.

Why we must change the physical vision of nature

Do the bare particles have an internal structure? Why are theories associated with particle physics extremely complicated? Authors of these theories assume that bare particles are point particles or closed strings and have a size of about 10^-35 m. This, in fact, is not true. The phase transitions of the Newtonian spacetime show that the bare particles have a very rich internal structure. Interactions of the bare particles with fields depend on their internal structures. Various theories show that these internal structures are neglected or are difficult to understand.
As a result, strange properties of the fields and postulated particles appear in order to obtain theoretical results consistent with the experimental data. We can, for example, remove almost all of the diagrams in the QED when we take into account the weak interactions of the bare electrons. The new electroweak theory is equivalent to the QED because the Einstein spacetime composed of the binary systems of neutrinos can carry both the electromagnetic and the weak interactions. Moreover, the electromagnetic mass of an electron-positron pair is equal to the bare mass of the electron calculated within this theory. This is the Feynman 'hocus-pocus' which makes the QED and the theory of electrons and photons presented within the Everlasting Theory equivalent theories. The new electroweak theory is non-perturbative. The higher dimensions and flexible strings in the string/M theory are consequently not necessary. We can replace the higher dimensions with enlarged phase spaces. In understanding the internal structure of bare particles, we can very easily calculate the total cross sections, lengths of scattering and effective radii without applying the theory of scattering. Due to the stable objects, the non-perturbative everlasting theory is very simple in comparison to the Standard Model, the string/M theory or the Cosmological Standard Model, and as a result the number of parameters is reduced to seven.
Can one formula describe all interactions?
In the formula coupling-constant = G(i)Mm/(ch), the M defines the total mass of the source(s) of interactions being in touch plus the mass of the simplest component of the field responsible for the interactions (for example, for strong interactions they are the gluons for mesons and the gluon loops for baryons), whereas the m is the mass of the carrier of the interaction (for example, for the strong interactions they are the gluon loops for mesons and the pions for baryons). The constants of interactions G(i) are directly proportional to the mass densities of the fields; for example, the ratio of the G(i) for electromagnetic interactions to the gravitational constant (i.e. for the long-distance fields) is equal to approximately 4.2·10⁴². Such a definition leads to the correct values of the coupling constants for low and high energies. The above formula shows that for particles without mass the coupling constant is equal to zero. It is obvious that for strong and electromagnetic interactions we cannot apply massless particles. We can see that the massless particles can be responsible for the interactions only after their transformation into particles carrying mass. The entangled photons can transform into the electron-positron pairs, whereas the entangled gluons can transform into loops or balls carrying mass, or into pions. Scientists do not fully understand Einstein's formula E = mc² and the fact that the origins of energy and mass are different. This formula follows from the law of conservation of spin and from the constancy of the natural speed of the entangled binary systems of neutrinos in the gradients of gravity 'attached' to mass. Energy is associated with the motions of mass. In electromagnetism, we can separate pure energy (i.e. the photons) from a field carrying photons, i.e. from the Einstein spacetime having a mass density. Photons cannot exist without the Einstein spacetime. Without the Einstein spacetime, the photons cannot transform into the electron-positron pairs, i.e. they cannot carry the electromagnetic interactions. The carriers of the gravitational force must have mass as well.
How should we define mass?
Mass is directly proportional to the total volume of the structureless tachyons that a particle consists of, whereas energy is defined by the motions of this mass. When a loop appears in the Einstein spacetime or a particle accelerates, the local pressure in this spacetime decreases, which increases the local mass density of the Einstein spacetime. We can say that mass (or the volume of tachyons) and energy (or the motions of tachyons) are the two everlasting attributes of nature, and that the inertial and gravitational masses have the same origin. The volume of the structureless tachyons defines all masses, i.e. inertial, gravitational, and relativistic.
Can the baryons have an atom-like structure?
The definition of the Planck length l = (Gh/(2πc³))^1/2 = 1.6·10⁻³⁵ m suggests that the similarity of structures can be broken at most for sizes smaller than about 10⁻³⁵ m. We see that galaxies, the solar system and atoms all have an atom-like structure. Since the baryons have sizes much greater than the Planck length, they should also take the form of an atom-like structure. My theory is that there is a massive core, and outside it the Titius-Bode law is obligatory for strong interactions. On the orbits there are pions. In strong fields, pions behave in a similar way to pairs of electrons in the ground state of atoms, which leads to the selection rules inside baryons.
Has the supersymmetry a different interpretation?
Since the total internal helicity of the fields must be equal to zero, all fermions (all fermions have internal helicities not equal to zero) arise as fermion-antifermion pairs, i.e. bosons. Such pairs behave like bosons. For example, electrons arise as electron-positron pairs, closed strings arise as closed string-antistring pairs, and so on. In such phenomena the quantum effects in the Einstein and Newtonian spacetimes are 'softened' because the internal helicity of the fields is still equal to zero. This is the reason why the fields carrying forces are composed of bosons. There is also the fermion-boson supersymmetry that follows from the phase transitions of the imaginary Newtonian spacetime. Inside the stable objects (fermions) there appear the loops (bosons). The ratio of the masses of a stable object to the associated loop is 10.77. The postulated exotic particles are not in existence.
Summary
A theory starting from the gas composed of the tachyons is more fundamental than the Theory of Relativity and the Quantum Physics. In the QCD, there is a procedure error for the low-energy regime. First, the exact masses of the up and down quarks should be defined, and next, from these parameters, we should derive the properties of the resting nucleons, i.e., among other things, we should calculate the masses of the nucleons and their magnetic moments. The big problems in calculating these physical quantities from the initial parameters follow from the procedure error for the low-energy regime. The Everlasting Theory shows that it is not true that almost the whole mass of resting nucleons (this is the low-energy regime as well) is the relativistic mass of the up and down quarks. We can say that due to the procedure error the QCD is incorrect for the low-energy regime. In the theory of the weak interactions, there is the mass-hierarchy error for the low-energy regime. The W and Z bosons are responsible for the weak interactions for energies higher than about 125 GeV, not for lower ones.
This means that the calculated values of the coupling constants for the weak interactions in the low-energy regime are incorrect. This is why the hocus-pocus appears in the QED and why there is no proof that the QCD 'confines' for low energies. The new theory of the weak interactions shows also that the Yang-Mills theory has a mass gap, i.e. the weak interactions in the ground state of the Einstein spacetime cause the massless fields to acquire mass. Due to the internal structure of the core of baryons described within the Everlasting Theory, we can eliminate all the problems that appear in the QCD and the electroweak theory in the low-energy regime. Equations relying on time should describe the motions and interactions; however, such equations are already in existence. The string/M theory based on vibrations of a flexible closed string leads to too many solutions. We need a theory describing the phase transitions of the Newtonian spacetime. This should lead us to an understanding of the internal structures of stable objects and fields, and to the postulates applied in the general theory of relativity (constancy of the speed of light in the Einstein spacetime and equivalence of the inertial and gravitational masses) and in quantum physics (the physical meaning of the uncertainty principle and of the wave function). We also need a correct and detailed theory relating to baryons. There appear the reformulated QCD and theory of chaos. The phase transitions of the Newtonian spacetime lead to the useful M-theory. The useful M-theory is a part of the Everlasting Theory. To describe the internal structure of baryons, we need something beyond the useful M-theory, i.e. the Titius-Bode law for the strong interactions.
References
[1] K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010).
[2] J. Stachel; Has the Quark-Gluon Plasma been seen?; http://arxiv.org/abs/nucl-ex/0510077 (2005).
Phase Transitions of Newtonian Spacetime, Neutrinos, Nucleons, Electrons, Pions and Muons
Introduction
In the previous chapter, I set forth the experimental data suggesting how the ultimate theory should look. I also formulated the program of the Everlasting Theory, i.e. the lacking part of the ultimate theory. Here I describe the phase transitions of the gas-like Newtonian spacetime and the internal structure of the main particles. Assume that the Newtonian spacetime is an ideal gas in a zero-dimensional infinite volume. The gas is composed of structureless tachyons that have a positive mass. The mass of a tachyon is directly proportional to its volume. Assume that the Einstein spacetime is a gas composed of binary systems of neutrinos. The initial conditions are the six parameters describing the physical state of the Newtonian spacetime plus the mass density of the Einstein spacetime. The mass density of the Einstein spacetime is the seventh parameter because it does not follow from the six parameters defining the Newtonian spacetime. Particles consist of the Einstein spacetime components. Creations and annihilations of particles change the local mass density of the Einstein spacetime. The initial seven parameters, listed in the Fig. titled 'The parameters in the Everlasting Theory' and describing the properties of the Newtonian and Einstein spacetimes, can be replaced with the new set of parameters listed below – as a result the ultimate theory is mathematically at its simplest. We can derive the new set of parameters from the initial set of parameters describing the properties of the Newtonian and Einstein spacetimes.
That means that these sets of parameters are equivalent. The calculated values of the new parameters are in accordance with the experimental data [1]. The calculated values of the new parameters are as follows:
Gravitational constant: G = 6.6740007·10⁻¹¹ m³/(kg s²)
Half-integral spin: h/2 = 1.054571548·10⁻³⁴/2 Js
Speed of light in spacetimes: c = 2.99792458·10⁸ m/s
Electric charge of electron: e = 1.60217642·10⁻¹⁹ C
Mass of electron: melectron = 0.510998906 MeV
Mass of free neutral pion: mpion(o),free = 134.97674 MeV
Mass of charged pion: mpion(+-) = 139.57041 MeV.
The phase transitions of the Newtonian spacetime
Since tachyons have linear and rotational energies, rotary vortices appear, i.e. the closed strings having internal helicity (see Fig. titled 'Anticlockwise internal helicity'). A closed string is stable because the internal helicity and the dynamic viscosity cause the Newtonian spacetime near the closed string to thicken. Because of the shape of a closed string, the pressure is lowest on its internal equator (see Fig. titled 'Stable tori'). This means that the thickened Newtonian spacetime becomes detached from the closed string on its internal equator, which leads to a negative pressure inside the closed string near it. A collimated jet appears in the Newtonian spacetime. Closed strings appear on the surfaces of regions with tachyons packed to the maximum. The probability of creating a maximally dense Newtonian spacetime is extremely low, however, not equal to zero. Such a state of spacetime behaves as an incompressible liquid. Stable closed strings appear on the surface of a maximally dense Newtonian spacetime only if, outside it, the gas-like Newtonian spacetime has a strictly determined mass density. The Reynolds number NR for the maximally dense Newtonian spacetime is
NR = ρtvt(2rt)/η = 1.0076047·10⁻¹⁹. (1)
In this definition the ρt denotes the maximum density of the Newtonian spacetime – this is the mass density of a tachyon and is ρt = 8.32192436·10⁸⁵ kg/m³. The (2rt) is the size of the element of a closed string, or the distance between the layers in the liquid. Because NR = 0 holds for an infinitely viscid fluid, such a liquid behaves as a solid body and the radius of a vortex can be infinite. On the other hand, the radius of a vortex should be directly proportional to the size of the element of a vortex. We can define the radius of the spinning closed string r1 as follows
r1 = (2rt)/NR = 0.94424045·10⁻⁴⁵ m. (2)
Only closed strings that have such a radius can arise in the Newtonian spacetime, and such strings are stable only when the density of the gas-like spacetime is strictly determined. We see that the phase transitions of the gas-like Newtonian spacetime are not always possible. The closed strings are inflexible. We can now calculate the number of tachyons K² a closed string consists of as follows:
K² = 2πr1/(2rt) = (0.7896685548·10¹⁰)². (3)
The spin of each closed string is half-integral
spin = K²mtvtr1 = h/2 = (1.054571548·10⁻³⁴/2) Js. (4)
We see that a closed string is composed of K² adjoining tachyons (the square of the K makes the calculations far simpler). The stable objects created during the phase transitions of the Newtonian spacetime should contain K², K⁴, K⁸, K¹⁶ tachyons. That saturates the interactions of the stable objects via the Newtonian spacetime. The masses of the stable objects are directly proportional to the number of closed strings.
This means that the stable objects contain the following numbers of closed strings: K⁰, K², K⁶, and K¹⁴, and it means that the masses of the stable objects are directly proportional to K^(2(d−1)), where d = 1 for closed strings, d = 2 for neutrinos, d = 4 for the cores of baryons and d = 8 for the objects before the 'soft' big bangs suited to life. The cosmic objects defined by d = 8 I will refer to as the protoworlds. The early Universe arose inside the Protoworld as the cosmic loop, together with the precursors of the DNAs; in this way there appear the early universes (i.e. the cosmic loops) 'suited to life' – I will explain this later on. The surface mass densities of all stable objects should have the same value. Furthermore, nature immediately repairs any damage to stable objects – this is why they are the stable objects. This means that the radii of the stable objects should be directly proportional to K^(d−1). The first phase transition of the Newtonian spacetime leads to the closed strings with internal helicity. This suggests that all the stable objects arising due to the phase transitions of the Newtonian spacetime should have internal helicity. Spheres cannot have internal helicity. The torus is the simplest object which can have an internal helicity. The mean radii of the tori of the stable objects are
rd = r1K^(d−1). (5)
The rest masses of the tori of the stable objects are
md = m1K^(2(d−1)), (6)
where m1 is the mass of the closed string. We know that the following equation defines a torus:
(x² + y² + z² − a² − b²)² = 4b²(a² − z²). (7)
Tori are most stable when b = 2a (see Fig. titled 'Stable tori'). Therefore, the radius of the internal equator is equal to a. The most distant point of such a torus (i.e. a point on the equator of the torus) is at a distance 3/2 of the mean radius resulting from (5). The radius of the equator I also refer to as the external radius of the torus. The spin speed on the equator of a torus resting in spacetime is equal to the natural speed of the components of the torus in the spacetime. This means that for b = 2a the mean spin speed of the whole torus is 2/3 of the natural speed of the components of the torus in spacetime. All components of a torus must have the same resultant speed in the spacetimes. Because the mean spin speed is 2/3 of the natural speed in spacetime, there appear radial speeds of the components of the torus. From the Pythagorean theorem it follows that the mean radial speed is Z1 = 0.745355992 of the natural speed in the spacetimes (see the check below). Due to the radial speeds of the components of a torus, the components go through the circular axis of the torus or through its centre. Due to b = 2a, the mean time of such exchanges is the same for both paths. Additional stabilization of the tori is due to the negative pressure created in the thickened beams of the Newtonian and Einstein spacetimes when the beams go through the surface of a torus, and due to the exchanges of the beams created on the equators of the components of the torus. Neutrinos, electrons, cores of baryons, and the protoworlds appear similar to the NGC 4261 galaxy, i.e. there is a 'point' mass in the centre of a torus. The surface of a torus looks similar to the Ketterle surface for a strongly interacting gas [2]. The tori consist of binary systems of smaller tori. A torus is a stable object because the smaller tori exchange the loops created on their equators. The distances between the smaller binary systems of tori are about 2πr, where r is the radius of the equator of the component.
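The numbers in (1)-(4) and the Z1 above can be cross-checked with a few lines of arithmetic. A minimal sketch (Python; NR, r1 and h are taken at the values quoted in the text, and the variable names are ours, not the author's):

```python
import math

NR, r1 = 1.0076047e-19, 0.94424045e-45   # Reynolds number (1) and string radius (2)
hbar = 1.054571548e-34                   # J s; the spin of a closed string is h/2

two_rt = r1 * NR                          # size of one tachyon, inverted form of (2)
K = math.sqrt(2 * math.pi * r1 / two_rt)  # from (3): ~0.78966855e10
mt_vt = hbar / 2 / (K**2 * r1)            # (4) fixes only the product mt*vt per tachyon
print(K, mt_vt)

# Mean radial speed of the torus components, as a fraction of the natural speed:
Z1 = math.sqrt(1 - (2 / 3) ** 2)          # Pythagorean theorem with mean spin speed 2/3
print(Z1)                                 # 0.745355992..., as quoted above
```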
The charges and spins of particles depend on the internal structure of the tori. The torus of the neutrino consists of binary systems of closed strings. The tori of the core of baryons and of electrons (an electron is only a part of the Einstein spacetime polarized in a specific way) are composed of binary systems of neutrinos. The torus of the Protoworld (the Protoworld arose after the period of inflation) is composed of deuterium. There is attraction between the closed strings in a binary system when the closed strings produce non-overlapping antiparallel jets. Due to the internal helicity of the closed strings in a binary system, negative pressure therefore arises in the Newtonian spacetime between the closed strings. All spins are perpendicular to the surface of the torus of a neutrino. There are four possibilities. In the weak charge of a neutrino, the senses of all spins of the closed strings are towards the circular axis of the neutrino, whereas in its weak anticharge all have opposite senses. In these two cases the binary systems are the dipoles (spin = 1). There are also two possibilities for the antiparallel spins of the neutrinos in a binary system. In both cases the binary systems are the scalars (spin = 0). The probability of creation of the dipoles is much higher (the components of a pair are much closer) than that of the scalars, but the dipoles can appear only when matter interacts with antimatter. The exchanged binary systems of neutrinos that the electrons and cores of baryons consist of make half-turns on the circular axis and in the centre of the torus. Due to the law of conservation of energy, the half-turns decrease the linear speeds of the exchanged particles and so also decrease the local pressure in the Einstein spacetime. This leads to a locally thickened spacetime, which means that a circular mass on the circular axis and a point mass in the centre of the torus appear. Similar phenomena take place in the neutrinos and protoworlds. The surfaces of the tori of neutrinos also have internal helicity. Since the neutrinos can appear as the neutrino-antineutrino pairs, the components of the surfaces of the tori of neutrinos are the weak dipoles. This leads to the four states of neutrinos (there are the two orientations of the dipoles and the two different helicities of the surfaces of the tori of the neutrinos). Inside the tori, loops are produced from the components of the spacetimes and other fields. From the Uncertainty Principle, for a loop having spin equal to 1, we obtain that the mass of a loop mloop,d is Xo times smaller than the mass of the torus calculated from (6)
Xo = md/mloop,d = 3πmdvdrd/h = 3π/2 = 4.71238898. (8)
For example, the large loops produced inside the tori in the cores of baryons, which are responsible for the strong interactions, have the mass mLL = 67.5444107 MeV. The strings, neutrinos, cores of baryons and protoworlds should have the same spin. This leads to the conclusion that the time of an interaction depends only on the involved energy, so the unification of all interactions is possible. Because all elementary objects have the same spin, from the following formula
mvr = h/2, (9)
we can calculate the natural speeds of the elementary objects in the spacetimes (the spin speed of a component of a torus on the equator of the resting torus is equal to the natural speed of the component in the spacetimes). The binary systems of neutrinos on the equator of the core of baryons are moving with a speed equal to the c (i.e.
with a speed 3/2 of the spin speed resulting from (9)), and it is the natural speed of the entangled binary systems of neutrinos in the gas-like Newtonian and Einstein spacetimes
c = 3h/(4m4r4) = 3h/(4mtr1K¹¹) = 299792458 m/s, (10)
where the mass of the torus in the core of baryons is X = m4 = 318.295537 MeV whereas the radius of the equator of the torus in the core of baryons is A = 3r4/2 = 0.69744247 fm. The torus in the core of baryons behaves as the strong charge/mass that carries an electric charge equal to that of the positron, whereas outside the strong field, due to the gluon-photon transitions, it behaves as the electric charge of the positron. The neutrino-antineutrino pairs are the carriers of the elementary gluons and photons. The rotating-spin pairs have three internal helicities (the three colours) but their internal structure is disclosed in the strong field only, because this field, in contrast to the electromagnetic field, has internal helicity due to the properties of the strong charge/mass. The maximum distance of a point on the internal equator of a torus from the equator of the torus is 4/3 of the distance of the point mass from the equator. Energy is inversely proportional to the length of a wave. This means that we can assume that the point mass has a mass of about 4/3 of the mass of the torus calculated from (6). The exact calculations resulting from the atom-like structure of baryons lead to Z2 = 1.3324865 – see the discussion below formulae (49) and (51) concerning the point mass of baryons. The internal helicity of a closed string, resulting from the angular speeds of the tachyons and their dynamic viscosity, means that the closed strings a torus of a neutrino consists of transform, outside the torus, the chaotic motions of tachyons into divergently moving tachyons. The direct collisions of the divergently moving tachyons with the tachyons the Newtonian spacetime consists of produce a gradient in this spacetime. The gravitational constant is associated with the gradient produced by all the closed strings a neutrino consists of. Because the constants of interactions are directly proportional to the mass densities of the fields carrying the interactions, the G we can calculate from the following formula
G = g·ρN = 6.6740007·10⁻¹¹ m³/(kg s²), (11)
where the g has the same value for all interactions and is equal to
g = vst⁴/η² = 25,224.563 m⁶/(kg² s²). (12)
The gradients in the Newtonian spacetime, produced by the internal helicity of the closed strings the neutrinos consist of, also produce gradients in the Einstein spacetime. Due to the binding energy, the mass of the core of baryons (it is 727.440 MeV – see Table 1) is 14.980 MeV smaller than the sum of the masses of the torus and the point mass (see the discussion below formula (51)). This leads to the conclusion that the masses of neutrinos, cores of baryons and protoworlds are about Z3 = 2.2854236 times greater than the masses of the tori calculated from (6). For example, the mass of the neutrino is mneutrino = 3.3349306·10⁻⁶⁷ kg. The number of binary systems of neutrinos Z4 on the torus in the core of a baryon is
Z4 = m4/(2mneutrino) = 8.50712236·10³⁸. (13)
The mean distance L1 of the binary systems of neutrinos on the surface of the torus in the core of a baryon is
L1 = (8π²A²/(9Z4))^1/2 = 7.08256654·10⁻³⁵ m. (14)
The mean distance L2 of the binary systems of neutrinos in the Einstein spacetime is
L2 = (2mneutrino/ρE)^1/3 = 3.92601594·10⁻³² m. (15)
The ratio Z5 of the mean distances is
Z5 = L2/L1 = 554.321081. (16)
The Compton length λbare(electron) of the bare electron is
λbare(electron) = AZ5 = 3.8660707·10⁻¹³ m. (17)
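The chain (13)-(17) is pure arithmetic once the inputs are fixed. A minimal sketch (Python; m4, mneutrino, A and L2 are taken at the values quoted above, since L2 itself requires the unquoted spacetime density ρE; variable names are ours):

```python
import math

MeV_to_kg  = 1.78266168115e-30
m4         = 318.295537 * MeV_to_kg   # kg, mass of the torus in the core of baryons
m_neutrino = 3.3349306e-67            # kg
A          = 0.69744247e-15           # m, external radius of the torus
L2         = 3.92601594e-32           # m, quoted result of (15)

Z4 = m4 / (2 * m_neutrino)                        # (13): ~8.507e38 neutrino pairs
L1 = math.sqrt(8 * math.pi**2 * A**2 / (9 * Z4))  # (14): ~7.0826e-35 m
Z5 = L2 / L1                                      # (16): ~554.32
print(Z4, L1, Z5)
print(A * Z5)   # (17): ~3.8661e-13 m, the Compton length of the bare electron
```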
The bare mass of the electron is
mbare(electron) = h/(cλbare(electron)) = 0.510407011 MeV. (18)
Knowing that melectron = (1.0011596521735)mbare(electron) (see formula (69)), we obtain the following mass of the electron: melectron = 0.510998906 MeV (for 1 MeV = 1.78266168115·10⁻³⁰ kg). On comparing the two definitions of the fine-structure constant for low energies αem, we arrive at the relation
ke²/(hc) = Gemmelectron²/(hc), (19)
where k = c²/10⁷ whereas Gem = GρE/ρN = 2.78025274·10³² m³/(kg s²). From formula (19), we can calculate the electric charge e of the electron
e = melectron(GρE10⁷/ρN)^1/2/c = 1.60217642·10⁻¹⁹ C, (20)
and next the fine-structure constant
αem = e²c/(10⁷h) = 1/137.036001. (21)
The binding energy of the large loop ΔELL, resulting from the creations of the electron-positron pairs, relative to the mass of the large loop mLL is (energy is inversely proportional to a length)
ΔELL/mLL = A/(2λbare(electron)). (22)
From this formula we obtain ΔELL = 0.06092535 MeV. During the creation of the neutral pion from two large loops, due to the electromagnetic interactions, an energy equal to 2ΔELLαem is released. The total binding energy of the neutral pion is
ΔEpion(o) = 2ΔELL(1 + αem) = 0.12273989 MeV. (23)
This means that the mass of the bound neutral pion (i.e. placed in a strong field) is mpion(o) = 134.96608 MeV. The energy of an opened large loop is the portion of the electromagnetic energy inside baryons. Near the torus in the core of baryons, nine opened large loops can appear at the same time (the 8 closed large loops responsible for the strong interactions, see the discussion below formula (32), and 1 responsible for the electromagnetic interactions), exchanged between nine real electron-positron pairs. Since one bare electron-positron pair at a time is associated with the rest mass of the electron, the nine electron-positron pairs force the production of a contracted electron having a mass Z6 = 9·1.0011596521735 = 9.01043687 times greater than the rest mass of the electron. This is realized when an electron antineutrino interacts with the point mass of the electron (see the discussion concerning Table 8). Sometimes the negatively charged pion decays to a neutral pion, an electron and an electron antineutrino, so the mass of the charged pion is mpion(+-) = mpion(o) + melectronZ6 = 139.57041 MeV. Outside the strong field the radiation mass of the neutral pion disappears, so the measured mass of the free neutral pion is mpion(o),free = mpion(+-) − 9·mbare(electron) = 134.97674 MeV. The α-order correction for the radiation energy created in the interactions of the virtual or real electron-positron pairs (created by the virtual or real photons emitted by an electrically charged particle) is
memc² = ke²/λC, (24)
where k = c²/10⁷ and the λC is the Compton wavelength of the particle. The Compton wavelength of an electrically charged particle is
λC = 2πh/(cm). (25)
Then from (24) and (25) we obtain
mem = αemm/(2π), (26)
since αem = e²c/(10⁷h). The simplest neutral pion consists of four energetic neutrinos. The charged pion, more often than not, decays into a muon and a neutrino. If we assume that these two particles arise from the bare mass of a charged pion and that the neutrino has an energy equal to one quarter of the mass of a neutral pion, then the calculated mass of a bound muon is
mmuon = mpion(+-) − mem-pion(+-) − mpion(o)/4 = 105.666974 MeV. (27)
Due to the strong interactions, in the decays of particles the neutral and charged pions appear most often. The charged pions decay to muons. We can assume that the free neutral pions gain their mass at the cost of the mass of the free muons.
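The chain (20)-(27) can be reproduced directly. A minimal sketch (Python, with melectron, Gem, A, λbare(electron) and mLL fixed at the values quoted above; names are ours, and the αem/(2π) coefficient in (26) is the reconstruction used in the text):

```python
import math

MeV_to_kg = 1.78266168115e-30
c, h = 2.99792458e8, 1.054571548e-34   # h is the 'h' of this text (J s)
m_e  = 0.510998906 * MeV_to_kg         # kg
Gem  = 2.78025274e32                   # m^3/(kg s^2), from formula (19)

e = m_e * math.sqrt(Gem * 1e7) / c     # (20): ~1.6022e-19 C
alpha_em = e**2 * c / (1e7 * h)        # (21): ~1/137.036
print(e, 1 / alpha_em)

A, lam, mLL = 0.69744247e-15, 3.8660707e-13, 67.5444107   # m, m, MeV
dE_LL = mLL * A / (2 * lam)                    # (22): ~0.0609 MeV
m_pi0 = 2 * mLL - 2 * dE_LL * (1 + alpha_em)   # (23): bound neutral pion, ~134.966 MeV
m_pip = 139.57041                              # MeV, charged pion
m_muon = m_pip - m_pip * alpha_em / (2 * math.pi) - m_pi0 / 4   # (26)-(27)
print(dE_LL, m_pi0, m_muon)                    # m_muon ~105.667 MeV
```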
This assumption leads to the conclusion that the mass of the free muon is mmuon,free = mmuon − (mpion(o),free − mpion(o)) = 105.656314 MeV. Simultaneously, there can appear the virtual bare particle-antiparticle pair(s) whose total positive mass is the sum of the two bare masses of the real particle (see the definition 'Virtual particles') and the binding energy emitted by the bare real particle.
Baryons
Key points:
*The core of baryons is a black hole in respect of the strong interactions.
*Outside of the core of baryons, the Titius-Bode law for strong interactions is obligatory. Between the core and a pion lying under the Schwarzschild surface for strong interactions, electric charge is exchanged. A pion (two large loops) in a strong field behaves similarly to two electrons in the ground state of an atom. This means that the selection rules for the pions and loops created in baryons appear.
*A neutral pion is a binary system of two large loops composed of binary systems of neutrinos. Large loops arise on the circular axis inside the torus of the core.
For the Titius-Bode law for strong interactions we can use the following formula:
Rd = A + dB, (28)
where Rd denotes the radii of the circular tunnels, the A denotes the external radius of the torus, d = 0, 1, 2, 4; the B denotes the distance between the second tunnel (d = 1) and the first tunnel (d = 0). The first tunnel is in contact with the equator of the torus. Hyperons arise very quickly because of the strong interactions. They decay slowly due to the tunnels. The pions in the tunnels circulate around the torus. Such pions I refer to as W pions because they are associated with strong-weak interactions. The pions behave in a similar way both in nucleons and in hyperons. Their mass is denoted by mW(+-o),d. The B we can calculate from the condition that the charged W pion in the d = 1 state, which is responsible for the properties of the nucleon, should have unitary angular momentum, because this state is the ground state for the W pions:
mW(+-),d=1(A + B)vd=1 = h, (29)
where vd=1 denotes the speed of the W pion in the d = 1 state. We can calculate the relativistic mass of the W pions using Einstein's formula
mW(+-o),d = mpion(+-o)/(1 − vd²/c²)^1/2. (30)
We know that the square of the speed is inversely proportional to the radius Rd (for d = 1 it is v²d=1 = c²A/(A + B)), so from (28) and (30) we have:
mW(+-o),d = mpion(+-o)(1 + A/(dB))^1/2. (31)
Since we know the A, from formulae (29)-(31) we can obtain the B = 0.5018395 fm. We see that the d = 1 state is lying under the Schwarzschild surface for the strong interactions. The large loops are responsible for the strong interactions, so the range of such interactions cannot be greater than the circumference of the large loop, i.e. it should be shorter than 2.915 fm. This leads to the conclusion that the radius of the last orbit for the strong interactions is A + 4B = 2.7048 fm. I will prove that the second solution, B' = 0.9692860 fm, is not valid. The creation of a resonance is possible when loops overlap with tunnels. Such bosons I call S bosons because they are associated with Strong interactions. Their masses are denoted by mS(+-o),d. The spin speeds of the S bosons (they are equal to the c) differ from the speeds calculated on the basis of the Titius-Bode law for strong interactions – this is the reason why the lifetimes of resonances are short. The mass of the core of resting baryons is denoted by mH(+-0).
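Eliminating vd=1 from (29)-(31) gives the condition mpion(+-)·c·(A + B)·(A/B)^1/2 = h, which is quadratic in B^1/2 and so has exactly two roots – the B and the B' just mentioned. A minimal numerical sketch (Python; constants as quoted in the text, variable names ours):

```python
import math

c, h = 2.99792458e8, 1.054571548e-34
A = 0.69744247e-15                     # m, external radius of the torus
m = 139.57041 * 1.78266168115e-30      # kg, rest mass of the charged pion

# (29)-(31) with v^2 = c^2*A/(A+B) collapse to m*c*(A+B)*sqrt(A/B) = h.
# Substituting s = sqrt(B) gives: sqrt(A)*s^2 - (h/(m*c))*s + A*sqrt(A) = 0.
p = h / (m * c)
disc = math.sqrt(p**2 - 4 * A**2)
for root in (p - disc, p + disc):
    B = (root / (2 * math.sqrt(A))) ** 2
    print(B * 1e15, "fm")   # ~0.5018 fm (B) and ~0.9692 fm (the rejected B')
```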
The maximum mass of a virtual S boson cannot be greater than the mass of the core, so I assume that the mass of the S boson created in the d = 0 tunnel is equal to the mass of the core. As we know, the ranges of virtual particles are inversely proportional to their masses. As a result, from (28) we obtain:
mH(+-0)A = mS(+-o),d(A + dB). (32)
There is some probability that a virtual S boson arising in the d = 0 tunnel decays into two parts. One part covers the distance A whereas the remainder covers the distance 4B. The large loops arise as binary systems (i.e. as the neutral pions) because then the strong field is more symmetrical. The part covering the distance A consists of four virtual neutral pions (i.e. of the eight large loops). Then the sum of the mass of the four neutral pions (539.87 MeV) and the mass of the remainder (187.57 MeV) is equal to the mass of the core of baryons and is equal to the mass of the S boson in the d = 0 state (727.44 MeV). Denote the mass of the remainder (it is an S boson) by mS(+-),d=4; then:
mS(+-),d=4 = mH(+-) − 4mpion(o). (33)
Since there is the positron–core-of-proton transition, we should increase the mass of the core by the electromagnetic energy emitted due to this transition. From this condition and using formulae (32) and (33) we have
mH(+-) = mpion(o)(A/B + 4) + αemmbare(electron) = 727.440123 MeV. (34)
There is some analogue of the energy appearing during this transition. The weak energy of the large loop is αw(proton)mLL = 1.265 MeV (see formula (51)) and such energy is needed in the proton–neutron transition. The nucleons and pions are respectively the lightest baryons and mesons interacting strongly, so there should be some analogy between the carrier of the electric charge interacting with the core of baryons (it is the distance of masses between the charged and neutral cores) and the carrier of an electric charge interacting with the charged pion (this is the electron). Assume that:
(mH(+-) − mH(o))/mH(+-) = melectron/mpion(+-). (35)
Formula (35) leads to a distance of masses between the charged and neutral core equal to 2.663 MeV. A similar value we obtain for an electron (plus an electron antineutrino) placed on the circular axis of the core (i.e. the point mass of the electron is placed on this axis). Then the electromagnetic binding energy is 3ke²/(2Ac²) = 3.097 MeV. If we subtract the mass of the electron we obtain Eb1 = 2.586 MeV. The weak binding energy of the Eb1 interacting with the core of baryon is Eb2 = 3GwEb1·mH(+)/(2Ac²) = 0.0831 MeV (see formula (50)). This leads to a distance of masses between the charged and neutral core equal to Eb1 + Eb2 = 2.669 MeV. The results obtained from formulae (31)-(35), with the value A/B = 1.389772, are collected in Table 1 (the masses are provided in MeV).
Table 1 Relativistic masses
d      mS(+-)         mS(o)          mW(+-)       mW(o)
0      727.440123     724.776800     –            –
1      423.043        421.494        215.760      208.643
2      298.243        297.151        181.704      175.709
4      187.573        186.886        162.013      156.668
The mass of the group of four virtual remainders is smaller than the mass of the virtual field of the nucleon. This leads to the conclusion that the symmetrical decays of the group of the four remainders lead to the Titius-Bode law for the strong interactions. The group of four virtual remainders reaches the d = 1 state. There it decays into two identical bosons. One of these components moves towards the equator of the torus whereas the other moves in the opposite direction. When the first component reaches the equator of the torus, the other one stops and decays into two identical particles, and so on.
In place of the decay, a 'hole' appears in the Einstein spacetime. A set of such holes is a 'tunnel'. The d = 4 orbit is the last orbit for strong interactions because on this orbit the remainder decays into photons, so the strong interactions disappear. We see that a boson having a range equal to the B' is not in existence. There is a probability y that the proton is composed of H+ and W(o),d=1 and a probability 1 − y that it is composed of Ho and W(+),d=1. From the Heisenberg uncertainty principle it follows that the probabilities y and 1 − y, which are associated with the lifetimes of protons in the above-mentioned states, are inversely proportional to the relativistic masses of the W pions, so from this condition and (31) we have
y = mpion(+-)/(mpion(+-) + mpion(o)) = 0.5083856, (36)
1 − y = mpion(o)/(mpion(+-) + mpion(o)) = 0.4916144. (37)
There is a probability x that the neutron is composed of H+ and W(-),d=1 and a probability 1 − x that it is composed of Ho, a resting neutral pion and Zo. The mass of the last particle is mZ(o) = mW(o),d=1 − mpion(o) (the pion W(o),d=1 decays because in this state both particles, i.e. the torus and the W(o),d=1 pion, are electrically neutral). Since the W(o),d=1 pion occurs only in the d = 1 state, and because the mass of the resting neutral pion is greater than the mass of Zo (so the neutral pion lives shorter), then
x = mpion(o)/mW(-),d=1 = 0.6255371, (38)
1 − x = 0.3744629. (39)
The mass of the baryons is equal to the sum of the masses of the components because the binding energy associated with the strong interactions cannot abandon the strong field. The mass of the proton is
mproton = (mH(+) + mW(o),d=1)y + (mH(o) + mW(+),d=1)(1 − y) = 938.2725 MeV. (40)
The mass of the neutron is
mneutron = (mH(+) + mW(-),d=1)x + (mH(o) + mpion(o) + mZ(o))(1 − x) = 939.5378 MeV. (41)
The proton magnetic moment in the nuclear magneton is
μproton/μo = mprotony/mH(+) + mproton(1 − y)/mW(+),d=1 = +2.79360. (42)
The neutron magnetic moment in the nuclear magneton is
μneutron/μo = mprotonx/mH(+) − mprotonx/mW(-),d=1 = −1.91343. (43)
The mean square charge for the proton is
<Q²>proton = e²[y² + (1 − y)²]/2 = 0.25e² (the quark model gives 0.33e²). (44)
The mean square charge for the neutron is
<Q²>neutron = e²[x² + (−x)²]/(2x + 3(1 − x)) = 0.33e² (the quark model gives 0.22e²), (45)
where [2x + 3(1 − x)] defines the mean number of particles in the neutron. The mean square charge for the nucleon is
<Q²>nucleon = [<Q²>proton + <Q²>neutron]/2 = 0.29e² (the quark model gives 0.28e²). (46)
Inside baryons, particles carrying the fractional electric charges are produced, so the arithmetic mean of both results should lie inside the interval determined by the experiment (the measured values of the <Q²>nucleon are (0.25–0.31)e²). We see that this is true. But there is a place for the quarks too – see the Chapter titled 'Reformulated Quantum Chromodynamics'. Notice that the ratio of the distance of masses between the charged and neutral pions to the mass of an electron is equal to the ratio of the masses of the charged core of baryons H+ and Z+, where mZ(+) = mW(+),d=1 − mpion(o). This should have some deeper meaning. Assume that the increases in the masses of the electrons and the Z+ boson are realized in the d = 0 state, because this tunnel has some width resulting from the diameter of the point mass of the virtual H+ created on the equator of the torus of the core of baryons. The width of the d = 1 tunnel means that the mentioned particles in this tunnel do not move with a speed equal to the c. The relativistic masses of the W pions can be calculated using Einstein's formula (30).
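Formulae (31)-(43) form a short numerical chain. A sketch that reproduces the d = 1 row of Table 1 and then the nucleon masses and magnetic moments (Python; all masses in MeV, names ours):

```python
import math

A, B = 0.69744247, 0.5018395             # fm
mH_p = 727.440123                        # MeV, charged core, from (34)
m_pi_pm, m_pi_0 = 139.57041, 134.96608   # MeV, charged / bound neutral pion

mH_0 = mH_p * (1 - 0.510998906 / m_pi_pm)   # (35): neutral core, ~724.777
mW_p = m_pi_pm * math.sqrt(1 + A / B)       # (31), d=1 charged W pion, ~215.760
mW_0 = m_pi_0 * math.sqrt(1 + A / B)        # (31), d=1 neutral W pion, ~208.643

y = m_pi_pm / (m_pi_pm + m_pi_0)            # (36)
x = m_pi_0 / mW_p                           # (38)
mZ_0 = mW_0 - m_pi_0                        # mass of the Z(o) particle

m_proton  = (mH_p + mW_0) * y + (mH_0 + mW_p) * (1 - y)            # (40): ~938.27
m_neutron = (mH_p + mW_p) * x + (mH_0 + m_pi_0 + mZ_0) * (1 - x)   # (41): ~939.54
mu_p = m_proton * y / mH_p + m_proton * (1 - y) / mW_p             # (42): ~+2.7936
mu_n = m_proton * x / mH_p - m_proton * x / mW_p                   # (43): ~-1.9134
print(m_proton, m_neutron, mu_p, mu_n)
```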
The definition of the coupling constant for the strong-weak interactions αsw (the core of baryons is a black hole with respect to the strong interactions, i.e. on the equator of the torus the spin speed is equal to the c) is
αsw = GswMm/(csd) = mvd²rd/(csd) = vd/c, (47)
where Gsw denotes the strong-weak constant, sd is the angular momentum of the particle in the d state whereas vd is the speed in the d tunnel. In the Einstein spacetime there can appear particles or binary systems of particles having spin equal to 1, because such spin have the components of the Einstein spacetime, i.e. the binary systems of neutrinos. For example, for the large loop responsible for the strong interactions sd = h and vd = c – this leads to αsw(large-loop) = 1. From formulae (30) and (47) we obtain
αsw(Z(+),d=0) = vd=0/c = (1 − (mZ(+)/mH(+))²)^1/2 = 0.993812976. (48)
The rp(proton) denotes the radius of the point mass of a proton and the range of the weak interactions of the point mass of a proton; because the range of the weak interactions of a single neutrino is 3482.87 times bigger than the external radius of its torus (see Chapter 'Interactions'), this latter radius is much smaller than the radius of the point mass of a proton. Because v² = GswmH(+)/r and because the particle Z(+-o),d=0 is at the distance r = rp(proton) + A from the centre of the torus, from formula (48) we obtain
A/(rp(proton) + A) = (vd=0/c)² = 1 − (mZ(+)/mH(+))². (49)
Then rp(proton) = 0.8710945·10⁻¹⁷ m. We calculated the sum of the circular mass and the mass of the torus: X = mc(proton) = 318.295537 MeV. Notice that the mass of H+ is greater than the doubled value of X. This means that the core of a baryon behaves in a different way than the bare electron. To obtain the exact mass of the core of baryons, the point mass Y must be Y = 424.124493 MeV. We see that the point mass of the core of baryons Y is approximately the sum of the X and the mass of the charged pion, minus one quarter of the mass of the neutral pion (424.124421 MeV). Since on the equator of the point mass the spin speed of the binary systems of neutrinos must be equal to the c, we can calculate the constant for the weak interactions
Gw = c²rp/Y = 1.0354864·10²⁷ m³s⁻²kg⁻¹. (50)
The coupling constant for the weak interactions of protons αw(proton) can be calculated using the formula-definition
αw(proton) = GwY²/(ch) = 0.0187228615. (51)
Y is the mass of the source and of the carrier of the weak interactions. The distance of mass between X + Y and H+ is equal to the binding energy resulting from the weak interactions of the point mass of the core of baryons with the virtual large loops arising at a distance of 2A/3 from the point mass, and with the virtual particles arising on the surface of the torus. The weak masses, i.e. the volumes filled with a little compressed Einstein spacetime, are exchanged. There arise the virtual H+,- particles and the particles having masses equal to the distance of masses between the charged and neutral pions. They arise as virtual pairs, so the axes of these dipoles converge on the circular axis of the torus, so they are also at a distance of 2A/3 from the point mass. The binding energy, E = mc², is equal to the sum of the masses of these three virtual particles (M = 727.440 + 67.544 + 4.604 = 799.59 MeV) multiplied by the point mass Y and the Gw, and divided by 2A/3. This leads to 14.980 MeV and to the mass of the charged core of baryons equal to 727.440123 MeV, and this result is consistent with the original mass of the H+.
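Formulae (50)-(51) and the 14.980 MeV binding energy are again direct arithmetic. A minimal sketch (Python; Y, rp and the sum M of the three virtual masses are taken at the values quoted above, names ours):

```python
MeV_to_kg = 1.78266168115e-30
c, h = 2.99792458e8, 1.054571548e-34

Y  = 424.124493 * MeV_to_kg     # kg, point mass of the core of baryons
rp = 0.8710945e-17              # m, from (49)
A  = 0.69744247e-15             # m, external radius of the torus

Gw = c**2 * rp / Y              # (50): ~1.0355e27 m^3/(kg s^2)
alpha_w = Gw * Y**2 / (c * h)   # (51): ~0.0187229
print(Gw, alpha_w)

# Weak binding energy of the core: E = Gw*M*Y/(2A/3), expressed in MeV.
M = 799.59 * MeV_to_kg          # kg, 727.440 + 67.544 + 4.604 MeV
E = Gw * M * Y / (2 * A / 3) / c**2 / MeV_to_kg
print(E)                        # ~14.98 MeV, as quoted in the text
```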
The new electroweak theory
Structure of the muon and magnetic moment of the electron
The external radius of the torus of an electron is equal to the Compton wavelength of the bare electron, which is rz(electron) = 3.8660707·10⁻¹³ m (see formula (17)). From (50), for a point mass Mp we have
GwMp = rpc², (52)
where rp denotes the range of the weak interactions. Since
αw = GwMpmp/(ch), (53)
where mp denotes a mass interacting weakly with the Mp, we have
αw = mprpc/h. (54)
To calculate the radius of the point mass of an electron, we should divide the point mass of an electron by the mass of Y, extract the cube root of the obtained result and then multiply it by the radius of the point mass of a proton. The radius of the point mass of an electron rp(electron) is
rp(electron) = 0.7354103·10⁻¹⁸ m. (55)
The point mass of the electron is half of the bare mass of the electron (see formula (18)). The density of the Einstein spacetime inside the point mass of an electron is the same as inside the point mass of a proton. This means that the speed on the equator of the point mass of an electron cannot be the c. Using the formula c² = GwM/rp(electron), we can calculate the virtual or real energy/mass of neutrinos which should be absorbed by the point mass of the electron: M = E + mp(electron) = 35.806163 MeV. A muon is an electron-like particle, i.e. it differs from the electron in the structure of its point mass. The point mass of a muon consists of three particles: two energetic neutrinos and the point mass of the contracted electron (the two neutrinos mean that the muon is stable). The additional point mass of the contracted electron is outside the circle having the spin speed equal to the c. If we assume that all three particles have the same mass, then to obtain the mass of the free muon the weak binding energy of the point mass of a muon should be 0.498281845 + mradiation(muon)/2. The energy lost by a free muon increases the mass of the virtual field. This means that the mass of the virtual field of a free muon is greater than the bare mass of the muon, due to the emitted binding energy and due to the energy lost by the free muon. We can see that the mass of the muon depends on the mass density of the point mass of the electron and on the size of the point mass of the non-contracted electron. From (54) we obtain the following value of the coupling constant for the electron-muon transformation
αw(electron-muon) = 9.511082·10⁻⁷. (56)
We see that
Xw = αw(proton)/αw(electron-muon) = (M/mp(electron))² = 19,685.3. (57)
Because the state of an electron is described by a wave function filling the entire Universe, and because the torus of an electron is a part of the Einstein spacetime, we must take into account the matter and the dark energy in our Universe. Dark energy is a sphere filled with binary systems of neutrinos created from the Protoworld. The mass of the dark energy is as many times greater than the baryonic mass of our Universe as the bare mass of the proton (it is the core of the proton) is greater than the mass of the large loop created on the circular axis of the torus of the proton – see Chapter titled 'New Cosmology'. The ratio of these values is κ = 10.769805. The ratio of the energy of matter (visible and dark) plus dark energy to the energy of matter is κ + 1. In understanding that the Y is the carrier of the weak interactions of electrons, for the coupling constant of the weak electron-proton interactions we obtain: α'w(electron-proton) ≈ Gw(Y − gw)mp(electron)/(ch) = 1.119·10⁻⁵, where gw is the weak binding energy of the Y and mp(electron), i.e. gw = GwYmp(electron)/rp(electron) = 3.0229 MeV.
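Formula (54) with the quoted rp(electron) reproduces (56)-(57) directly. A quick check (Python; the electron's point mass is taken as half the bare mass, as stated above, and the names are ours):

```python
c, h = 2.99792458e8, 1.054571548e-34
MeV_to_kg = 1.78266168115e-30

mp_e = 0.510407011 / 2            # MeV, point mass of the electron
rp_e = 0.7354103e-18              # m, from (55)
M    = 35.806163                  # MeV, from c^2 = Gw*M/rp(electron)

alpha_w_em = mp_e * MeV_to_kg * rp_e * c / h   # (54): ~9.511e-7
Xw = (M / mp_e) ** 2                           # (57): ~19,685
print(alpha_w_em, Xw)
print(0.0187228615 / alpha_w_em)               # consistency with alpha_w(proton)
```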
The mass of Y can be virtual or real. The real mass Y appears when the electron transforms into an antiproton. A value close to the κ + 1 we obtain for the ratio of the mass Y − gw to the mass M = 35.806163 MeV. This similarity leads to the conclusion that the electron-muon transformation (due to the weak interactions) is associated with the electron-matter interactions, whereas the electron-proton weak interactions are associated with the electron-matter-dark-energy weak interactions. The exact value of the coupling constant of the weak interactions of an electron placed in the matter and dark energy is
α'w(electron-proton) = (κ + 1)αw(electron-muon) = 1.11943581·10⁻⁵. (58)
The mass of a resting electron is equal to the mass of a bare electron plus the electromagnetic and weak masses resulting from the interactions of the components of the virtual electron-positron pairs (it is the radiation mass of the pairs), plus the weak mass resulting from the interaction of the point mass with the radiation mass of the virtual pairs. Virtual pairs behave as if they were at a distance equal to 2rz(torus)/3 from the point mass. We neglect the pair-electron electromagnetic interactions because the pairs are electrically neutral. The formula for the coupling constants of the gravitational, weak and strong interactions is as follows:
αi = GiMm/(ch). (59)
The energy of the interaction is defined by the formula
Ei = GiMm/r, (60)
then from (59) and (60) we obtain
Ei = αich/r = mic². (61)
On the other hand, the Compton wavelength of the bare particle is equal to the external radius of a torus and is defined by the formula
λ = rz(torus) = h/(mbarec), (62)
then from (61) and (62) we obtain
mi = αimbare/(r/rz(torus)). (63)
Most often the point mass of an electron appears near the point mass of a nucleon because there the mass density of the Einstein spacetime is higher. From (58) we have
α'w(electron-proton) = 1.11943581·10⁻⁵. (64)
As a result, we can introduce the symbol
θ = αem/(α'w(electron-proton) + αem), (65)
where θ denotes the mass fraction in the bare mass of the electron that can interact electromagnetically, whereas 1 − θ denotes the mass fraction in the bare mass of the electron that can interact weakly. The electromagnetic mass of a bare electron is equal to its weak mass. Since the distance between the constituents of a virtual pair is equal to the length of the equator of a torus (because such is the length of the virtual photons), the ratio of the radiation mass (created by the virtual pairs) to the bare mass of the electron is
β = θαem/(2π) + (1 − θ)α'w(electron-proton)/(2π) = 0.00115963354. (66)
The ratio of the total mass of an electron to its bare mass, which is equal to the ratio of the magnetic moment of the bare electron to the Bohr magneton for the electron, is described by the formula
ε = 1 + β + βα'w(electron-proton)/(2/3). (67)
Due to the annihilations of the virtual pairs, holes are produced in the Einstein spacetime, decreasing the mass density of the radiation field. Since for the virtual electron the product mbare(electron)α'w(electron-proton) is about 7.2·10⁻⁷ times the product mp(proton)αw(proton) for the proton, the final result is lower than follows from (67) by the value
Δεelectron = (ε − 1)·7.2·10⁻⁷ = 8.344077·10⁻¹⁰. (68)
Then we obtain the following value
ε' = ε − Δεelectron = 1.0011596521735. (69)
Summary
The Everlasting Theory is the lacking part and the foundations of the ultimate theory. The phase transitions of the Newtonian spacetime lead to the physical constants, to an atom-like structure of baryons and to a new cosmology.
My theory is very simple because it is based on only seven parameters and three formulae – two formulae are associated with the phase transitions and one formula is associated with the Titius-Bode law for strong interactions – and it concerns the stable stages of bare particles. This theory is an extension of Einstein's theories of relativity and of the correct part of the quantum theory. Gravity needs inflexible neutrinos. The G then has the same value for all masses. The Newtonian spacetime is classical and leads to the correct part of the quantum theory. The Everlasting Theory provides very good results. The exotic particles and tau neutrinos are not in existence. Two of the seven parameters, i.e. the inertial mass density of tachyons and the dynamic viscosity, do not change with time. The other five can have different values in different cosmic bulbs, whose walls are composed of the pieces of space packed to maximum. The walls are then hermetic for the Newtonian spacetime. The values of the seven parameters in our bulb lead to the fundamental laws of conservation of energy and spin, and to the principle of relativity. Today, of course on a cosmic scale, almost all closed strings in our bulb are inside the masses, so there are only two spacetimes, leading to gravity and electromagnetism. All particles greater than the neutrino are built of the very stable neutrinos. The lacking dark energy is inside the neutrinos because they are composed of the closed strings moving with superluminal speeds. Exchanges of the binary systems of the closed strings are responsible for the entanglement of particles. There can be an infinite number of the cosmic bulbs. Three conditions must be satisfied in order to create life. First, the mass densities of the spacetimes must be such that the creations of the stable objects are possible. The laws of physics should not vary. Next, the Protoworld must have a strictly determined mass so that the Protoworld-neutrino transition is possible. Finally, because universes arise as the universe-antiuniverse pairs, the constituents of a pair must be sufficiently distant from each other.
Table 2 Theoretical results
Physical quantity                                     Theoretical value*
Gravitational constant                                6.6740007 E-11 m³/(kg s²)
Half-integral spin                                    (1.054571548 E-34)/2 Js
Speed of light                                        2.99792458 E+8 m/s
Electric charge                                       1.60217642 E-19 C
Mass of electron                                      0.510998906 MeV
Fine-structure constant for low energies              1/137.036001
Mass of bound neutral pion                            134.96608 MeV
Mass of free neutral pion                             134.97674 MeV
Mass of charged pion                                  139.57041 MeV
Radius of closed string                               0.94424045 E-45 m
Linear speed of closed string                         0.7269253 E+68 m/s
Mass of closed string                                 2.3400784 E-87 kg
External radius of neutrino                           1.1184555 E-35 m
Mass of neutrino                                      3.3349306 E-67 kg
Mass of Protoworld                                    1.961 E+52 kg
External radius of Protoworld                         287 million light-years
Mass of the Universe                                  1.821 E+51 kg
Radius of the early Universe loop                     191 million light-years
External radius of torus of nucleon                   0.697442473 fm
Constant K                                            0.7896685548 E+10
Binding energy of two large loops                     0.12273989 MeV
*E-15 denotes 10⁻¹⁵, etc.
Table 2a Theoretical results
Physical quantity                                               Theoretical value
Mass of large loop                                              67.5444107 MeV
Mass of torus of core of baryons                                318.295537 MeV
Point mass of the nucleon                                       424.124493 MeV
Range of weak interactions of the proton                        8.710945 E-18 m
Weak binding energy of core of baryons                          14.980 MeV
Mass of charged core of baryons                                 727.440123 MeV
Ratio of mass of core of baryons to mass of large loop          10.769805
Mass of electron to mass of bare electron                       1.0011596521735
Mass of bound muon                                              105.666974 MeV
Mass of free muon                                               105.656314 MeV
The A/B in the Titius-Bode law for strong interactions          1.38977193
Mass of proton                                                  938.2725 MeV
Mass of neutron                                                 939.5378 MeV
Proton magnetic moment in nuclear magneton                      +2.79360
Neutron magnetic moment in nuclear magneton                     -1.91343
Radius of last tunnel for strong interactions                   2.7048 fm
Mean square charge for nucleon                                  0.29 e²
Mean square charge for proton                                   0.25 e²
Mean square charge for neutron                                  0.33 e²
External radius of torus of electron                            386.607 fm
Range of weak interactions of electron                          0.7354103 E-18 m
Weak constant                                                   1.0354864 E+27 m³/(kg s²)
Electromagnetic constant for electrons                          2.7802527 E+32 m³/(kg s²)
Coupling constant for weak interactions of the proton           0.0187228615
Coupling constant for the electron-proton weak interaction      1.11943581 E-5
Coupling constant for the electron-muon weak interaction        0.9511082 E-6
Coupling constant for strong-weak interactions inside the baryons   d=0: 0.993813; d=1: 0.762594; d=2: 0.640304
References
[1] K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010).
[2] M. W. Zwierlein, J. R. Abo-Shaeer, A. Schirotzek, C. H. Schunck, and W. Ketterle; Vortices and superfluidity in a strongly interacting Fermi gas; Nature 435, 1047-1051 (2005).
Interactions
Here I show the mathematical and physical relations between the different interactions.
Table 3a Interactions
(columns: name of source; number of states; what produces gradients in fields; name of interaction; range)
Tachyons (1 state): direct collisions – the fundamental interaction; range = 0.5·10⁻⁶⁴ m.
Closed string (2 states): tachyon jet (line of gravitational field); range = 2·10³⁶ m.
Neutrino (4 states): divergently moving tachyon jets – they produce the gravitational interaction, range = 2·10³⁶ m; exchanged small loops produced on the equator of a neutrino and composed of the superluminal binary systems of closed strings (the binary systems of closed strings a neutrino consists of suck up the Newtonian spacetime from some volume) – the weak interaction, range of confinement = 3482.87R(neutrino) – and entanglement, range = size of the Universe.
Core of baryon (2 states): divergently moving fluxes of binary systems of neutrinos (the binary systems are the carriers of massless photons and gluons) produced in the annihilations of electron-positron pairs appearing in the Einstein spacetime (the pairs are produced by the virtual or real photons) – the electromagnetic interaction, range = 2·10³⁶ m; exchanged volumes filled with additional binary systems of neutrinos – the weak interaction, range for proton = 0.871·10⁻¹⁷ m, range for electron = 0.735·10⁻¹⁸ m; exchanged large single loops composed of carriers of gluons (in mesons) or binary systems of loops (between baryons) appearing on the circular axis of the torus – the strong interaction, range = 2.92·10⁻¹⁵ m (circumference of the large loop). The 8 different carriers of gluons are the Feynman partons; the three internal helicities of a carrier of gluons cause the gluons to be the three-coloured particles; due to the internal helicities of the core of baryons and of the particles produced inside it, we cannot neglect the internal structure of the carriers of gluons in the strong fields; outside the strong field the gluons become the photons.
Protoworld (2 states): divergently moving tachyon jets – the gravitational interaction, range = 2·10³⁶ m (the Kasner solution and the BKL model).
Charges (sources): gravitational mass (i.e. composed of weak bi-dipoles); weak charge (torus of neutrino); strong charge (torus of core of baryons); electric charge (torus of core of baryons; only outside the strong field, due to the transition of gluons into photons); electric charge (torus of electron, i.e. the polarized part of the Einstein spacetime); superstrong charge (cosmic torus of protoworld).
Table 3b Interactions
(columns: emitted massless rotational energy; mass carrier of the emitted massless rotational energy; quantum particles)
Gravitons (they behave as two entangled photons) – carried by weak bi-dipoles (two binary systems of neutrinos, i.e. two weak dipoles; spin = 2; today the classical gravity) – quantum particles: weak bi-dipoles (only in the era of inflation; Quantum Gravity).
Entanglons – carried by binary systems of closed strings, i.e. half-jet dipoles (spin = 1) – quantum particles: binary systems of closed strings (only in the era of inflation).
Gluons – carried by entangled weak dipoles – quantum particles: weak dipoles (only in the era of inflation).
Photons – carried by entangled weak dipoles – quantum particles: photons.
Photons – carried by entangled weak dipoles – quantum particles: electrons and electron-positron pairs.
Table 4 Phase spaces
Stable object                                            Co-ordinates and quantities needed to describe position, shape and motions
Tachyon                                                  6 (5 + time)
Closed string                                            10 (9 + time)
Closed string-antistring pair, Neutrino                  26 [9 (large torus) + 7 (small tori on the surface of the large torus) + 9 (small tori on the surface of the point mass) + time]
Neutrino-antineutrino pair, Core of baryons, Electron    58 (9 + 23 + 25 + time)
Protoworld                                               122 (9 + 55 + 57 + time)
We see that for the stable objects we have N = (d − 1)·8 + 2, where N denotes the number of needed co-ordinates and quantities whereas d = 0, 1, 2, 4, 8, 16. Then for the N we obtain −6 (the Newtonian spacetime is the imaginary spacetime), 2 (for rotating spin), 10, 26, 58 and 122.
For example, to describe the position, shape and motions of a closed string we need three coordinates, two radii, one spin speed, one angular speed associated with the internal helicity, and the time associated with the linear speed. To describe the rotation of the spin we additionally need two angular speeds. This means that the phase space of a closed string has ten elements, whereas that of the string-antistring pair has eleven. We can see that we can replace the higher spatial dimensions (i.e. more than three) with enlarged phase spaces.

The weak interactions of baryons lead to the fundamental force

Now, verify whether the mass Y leads to stable closed strings. Gravitational mass is directly proportional to the number of closed strings a mass consists of. Using the following formula we can therefore calculate the number of closed strings N_cs that the point mass of the core of baryons is composed of

N_cs = Y/m_1. (70)

Assume that the radius of the point mass has a strictly determined value because the closed strings suck up the Newtonian spacetime from its interior. To calculate the volume V_s of the spacetime a closed string sucks up we can use

V_s = 4πr_p(proton)^3/(3N_cs). (71)

Due to the shape of a closed string, the pressure of the Newtonian spacetime inside it is a little lower, so the sucked-up spacetime separates from the closed string on its internal equator. A tachyon jet is produced there. The sucked-up tachyons have radial speeds equal to the linear speeds of the tachyons. The volume of one separated portion of the thickened spacetime is

V_cs = 2πr_t·πr_t^2. (72)

Knowing the inertial mass density ρ_N of the Newtonian spacetime, we can calculate the mass density ρ_ts of the thickened Newtonian spacetime

ρ_ts = ρ_N·V_s/V_cs. (73)

The centripetal force acting on one tachyon depends on the pressure difference between the interior and exterior of the closed string. Because ρ_ts >> ρ_N, the centripetal force F_cpt is

F_cpt = πr_t^2·ρ_ts·v_t^2/2. (74)

Next, compare the obtained centripetal force with the centrifugal force F_cft acting on the tachyons a closed string is composed of

F_cft = m_t·v_t^2/r_1. (75)

For both forces we obtain about 2.2·10^133 N. This means that the closed strings are stable particles.

Knowing the point mass Y of the proton and applying formula (49), we can calculate the mass density of the point mass. The mass density of the Einstein spacetime is a parameter of the theory; the ratio of the mass densities of the Einstein spacetime and the point mass Y is 40,363. Formula (15) defines the mean distance between the neutrino-antineutrino pairs in the Einstein spacetime. The mean distance between the neutrino-antineutrino pairs in the point mass Y is (40,363/(40,363 + 1))^(1/3) = 0.9999917 times the mean distance in the Einstein spacetime. This means that the mean distance is 3482.87·r_neutrino. It is the range of the weak interactions of a single neutrino, i.e. the range of the confinement of the carriers of gluons and photons.

Homogeneous description of all interactions

The constants of interactions are directly proportional to the inertial mass densities of the fields carrying the interactions. The following formula defines the coupling constants of all interactions

α_i = G_i·M_i·m_i/(c·h), (76)

where M_i denotes the sum of the masses of the sources of the interaction being in touch plus the mass of the component of the field, whereas m_i denotes the mass of the carrier of the interactions. We know that the neutral pion is a binary system of large loops composed of the binary systems of neutrinos.
This means that inside the neutral pion the binary systems of neutrinos are exchanged, whereas between the neutral pions the large loops are exchanged. We can neglect the mass of a binary system of neutrinos in comparison to the mass of the neutral pion. On the other hand, from (47) it follows that the coupling constant for the large loop is unitary because its spin speed is equal to c. Due to the formula E = mc^2, the massless energy frozen inside a pion is equal to its mass. Then, for the strongly interacting neutral pion,

α_S = G_S·(2m_pion(o))·(m_pion(o)/2)/(c·h) = v/c = 1, (77)

where v denotes the spin speed of the large loop. The constant of the strong interactions is therefore G_S = 5.46147·10^29 m^3 s^-2 kg^-1. The coupling constant for the strongly interacting proton, for low energies, is

α_S(pp) = G_S·(2m_proton + m_pion(o)/2)·m_pion(o)/(c·h) = 14.4038. (78)

In the relativistic version, G_S is constant. When we accelerate a baryon, the spin speed of the large loop decreases, so its mass also decreases: E(loop)·2πr(loop)/v(spin speed of loop) = h. This means that the mass of the carrier decreases, whereas when nucleons collide the number of sources increases. These conditions lead to the conclusion that the value of the running coupling decreases when energy increases (see the paragraph titled “Running couplings”). The other constants of interactions for low energies, i.e. the gravitational constant G, the electromagnetic constant for electrons G_em and the weak constant G_w, I calculated before (see formulae (11), (19) and (50), respectively).
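A numerical sketch of (77) and (78). The conversion factor 1 MeV/c^2 = 1.782661e-30 kg is my addition, and the quoted values are only reproduced if the symbol h in (76) denotes the reduced Planck constant, consistent with the "half-integral spin" entry in Table 2.

```python
# Formula (76): alpha_i = G_i * M_i * m_i / (c * h), with h read as hbar.
c, hbar = 2.99792458e8, 1.054571548e-34
MeV = 1.782661e-30                     # kg per MeV/c^2 (assumed conversion)
GS = 5.46147e29                        # strong constant, m^3 s^-2 kg^-1
m_pion, m_proton = 134.96608 * MeV, 938.2725 * MeV

alpha_S_pion = GS * (2 * m_pion) * (m_pion / 2) / (c * hbar)         # formula (77)
alpha_S_pp = GS * (2 * m_proton + m_pion / 2) * m_pion / (c * hbar)  # formula (78)
print(alpha_S_pion)   # ~1.000
print(alpha_S_pp)     # ~14.404
```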
Running couplings

We can calculate the coupling constants from formula (76). From formulae (11) and (12) we know that the constants of interactions depend linearly on the mass densities of the appropriate fields.

Strong and strong-weak interactions of colliding nucleons

Formula (78) defines the coupling constant for two strongly interacting non-relativistic protons. The scale in my theory is as follows. When energetic nucleons collide, the Titius-Bode orbits for the strong interactions, i.e. the strong field, are destroyed. This means that the colliding nucleons interact due to the weak masses of the large loops responsible for the strong interactions. The strong-weak interactions of the colliding nucleons depend on the properties of the pions, i.e. of the binary systems of large loops. The weak mass of the virtual particles produced by a binary system of large loops is f = 2α_w(proton) = 1/26.7053 = 0.0374457 times smaller than the rest mass of the large loop, and this value is the scale factor for the running coupling of the strong-weak interactions of colliding nucleons. This means that the running constant α_sw of the strong-weak interactions for colliding nucleons is defined by

α_sw = f·α_s, (79)

where f = 2α_w(proton). When the energy of a proton increases then, due to the uncertainty principle, the mass of the components of the fields decreases (the energy of a component of the field multiplied by the spin period is h; the spin period increases when the energy of the proton increases). We can calculate the mass m_sw of the carrier from the following formula (the calculations are analogous to formulae (103)-(105))

m_sw = m_pion(o)·β, (80)

where

β = (1 - v^2/c^2)^(1/2), (81)

and v denotes the relativistic speed of the nucleon. When the energy of a collision is E = n·m_proton then β = 1/n. When the energy of the colliding protons increases, more strongly interacting sources appear. The sources are in contact because there is a liquid-like substance composed of the cores of baryons. The atom-like structure of the baryons is destroyed. This means that a colliding nucleon and the new sources behave as one source. The strong-weak interactions are associated with the torus (the mass of the torus is X = 318.3 MeV) whereas the mass of the charged core is m_H(+) = 727.44 MeV, so the mass M_sw of the source for a colliding proton is

M_sw = 2m_proton + m_pion(o)·β/2 + X·⌊(1/β - 1)·2m_proton/(2m_H(+))⌋. (82)

Due to the frozen energy, one charged torus is associated with the energy 2m_H(+). This means that there are separated fragments of the curve representing the running coupling for the strong-weak interactions of colliding nucleons. When we neglect the integer-part operation in formula (82), then from (76), (78) and formulae (79)-(82) we obtain the following function for the strong-weak running coupling

α_sw = a_u·β^2 + b_u·β + c_u, (83)

where a_u = 0.0187229 = α_w(proton), b_u = 0.4067 and c_u = 0.1139. This curve starts from 1.67 GeV and leads through the upper limits of the sectors representing the successive ‘jumps’ of the running coupling. The ‘jumps’ appear for the following energies

E_n [GeV] = m_proton + n·m_H(+), (84)

where n = 2, 3, 4, 5, … For n = 1 we observe a drop in the value of the running constant from 8.113 to 0.349. You can see in the chapter “Reformulated Quantum Chromodynamics” how the mass of the charm quark defines the energy E_1. The widths of the ‘jumps’ can be calculated from

Δα_sw = f·G_s·ΔM·m/(c·h) = d_j·β, (85)

where d_j = 0.0883096, ΔM = X and m = m_pion(o)·β, the masses being expressed in kilograms. For the curve leading through the lower limits of the sectors representing the successive ‘jumps’ we obtain

α_sw = a_l·β^2 + b_l·β + c_l, (86)

where a_l = 0.01872, b_l = 0.3184 and c_l = c_u = 0.1139.

We can see that there is an asymptote at α_sw = 0.1139. This means that there is asymptotic packing of the cores of baryons, not asymptotic freedom of quarks and gluons. Asymptotic freedom leads, for high energies, to a gas-like plasma, whereas asymptotic packing leads to a liquid-like plasma and is consistent with the experimental data. It proves that baryons do not consist of point quarks. This asymptotic packing suggests that baryons have a massive core, which is what I propose and support in my theory. We can also see from my theory that the beta function is negative for the separated fragments, is infinite for the jumps, and is practically equal to zero for energies close to the maximum energy of the proton (about 18 TeV). A closer experiment should show the internal structure of the curve for the running coupling of the strong-weak interactions of colliding nucleons. The internal structure of the core of baryons should be overcome when the surface of the point mass reaches the torus, i.e. when the radius of the point mass increases 1/f = 26.71 times. This happens when the mass of the proton increases (1/f)^3 = 1.9·10^4 times, i.e. for an energy of about 18 TeV. Above this energy, the proton loses the surplus energy. The mass of the region of the Einstein spacetime inside the non-relativistic point mass in the centre of the core of baryons is approximately 17.1 TeV, so probably there exists a Type W boson carrying such a mass.
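A short sketch of the running-coupling curves (83), (86) and the jump energies (84); the identification β = m_proton/E follows the text's statement that β = 1/n for E = n·m_proton.

```python
# Strong-weak running coupling for colliding nucleons, formulae (83)-(86) and (84).
m_proton, m_H = 0.9382725, 0.72744017            # GeV
beta = lambda E: m_proton / E                     # beta = 1/n with E = n * m_proton
upper = lambda b: 0.0187229 * b**2 + 0.4067 * b + 0.1139   # (83)
lower = lambda b: 0.01872 * b**2 + 0.3184 * b + 0.1139     # (86)

E1 = m_proton + m_H                               # ~1.67 GeV, start of the curve
print(E1, upper(beta(E1)))                        # ~0.349, the value after the n=1 drop
for n in range(2, 6):                             # jump energies E_n, formula (84)
    En = m_proton + n * m_H
    print(n, En, lower(beta(En)), upper(beta(En)))
# Both curves approach the asymptote alpha_sw = 0.1139 as beta -> 0.
```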
Weak interactions

Since G_w = const., from formula (51) we obtain that the coupling constant for the weak interactions of nucleons does not depend on their energy, because the point masses Y of the cores of baryons do not adhere in the liquid-like substance.

Electromagnetic interactions

Within the liquid-like plasma (it consists of the cores of baryons and antibaryons; inside such plasma the d = 1, 2, 4 states are destroyed), in the d = 0 states, i.e. on the equators of the cores of baryons, contracted electron-positron pairs appear. The mass of a contracted pair is x_m = 9.0104369 times greater than the mass of the electron-positron pair (see the discussion below formula (23)). From the formula α_em = G_em·m_electron^2/(c·h) we obtain that in the high-energy collisions of nucleons the coupling constant for the electromagnetic interactions of the contracted electrons is x_m^2 = 81.18797 times greater than the fine-structure constant. One more contracted pair appears per each new core-anticore pair. This leads to the conclusion that the probability of electron-positron pair creation is Z_7 = 727.440/0.5109989 = 1423.6 times higher than that of a contracted pair. This means that the value of the coupling constant for the electromagnetic interactions inside the liquid-like plasma should be

α_em·(x_m^2 + Z_7)/(1 + Z_7) = 1/129.7. (87)
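A one-screen check of (87), using only the numbers quoted in the text and in Table 2.

```python
# Effective electromagnetic coupling inside the liquid-like plasma, formula (87).
alpha_em = 1 / 137.036001     # fine-structure constant for low energies
x_m = 9.0104369               # contracted pair / electron-positron pair mass ratio
Z7 = 727.440 / 0.5109989      # ~1423.6, relative pair-creation probability
alpha_plasma = alpha_em * (x_m**2 + Z7) / (1 + Z7)
print(1 / alpha_plasma)       # ~129.7, i.e. alpha ~ 1/129.7
```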
Gravitational interactions

The closed strings a neutrino consists of transform the chaotic motions of tachyons into divergently moving tachyons. Due to the dynamic viscosity of the closed strings, the mass density of the Newtonian spacetime rapidly increases only on the surface of a closed string (about 10^82 times; see (11) and (73)). Since the torus of a neutrino produces about 6·10^19 divergent tachyon jets, for distances greater than about 3.9·10^-32 m (this distance is the range of the weak interactions of a single neutrino) the gravitational constant is constant. Due to the density of the Newtonian spacetime and the tremendous pressure (about 10^180 Pa), the neutrino stretches the gravitational field to a distance of 2·10^36 m. The total cross section of all tachyons in the volume of a rectangular prism 1 m · 1 m · 2·10^36 m is 1 m^2, so all divergently moving tachyons are scattered. The neutrinos are the ‘carriers’ of the gravitational constant. There are only 4 different neutrinos (the electron neutrino and its antineutrino, and the muon neutrino and its antineutrino). The graviton could be the rotational energy (its mass is zero) of a particle composed of the four different neutrinos in such a way that the carrier of the graviton is the binary system of binary systems of neutrinos with parallel spins, i.e. the spin of the carrier of the gravitational energy is 2. A rotating neutrino bi-dipole behaves as two entangled photons, not as a graviton. Gravitational energy is emitted via the flows in the Einstein spacetime composed of the non-rotating-spin neutrino bi-dipoles. The neutrinos, binary systems of neutrinos, bi-dipoles of neutrinos, and so on, produce the gradients in the Newtonian spacetime that are impressed on the Einstein spacetime too. We can describe gravity via such gradients. When the time of an interaction is longer than about 10^-60 s, the Newtonian spacetime looks like a continuum to the interacting particle and we can apply the Einstein equations and the Noether theorem. Such a continuum leads to the symmetries and the laws of conservation. Since the spin of the neutrino bi-dipoles is 2 whereas that of the neutrinos is 1/2, gravity leads to the conclusion that the neutrinos have only two flavours, i.e. only four different neutrinos exist. The tau neutrinos do not exist.

Fine-structure constant for quasars

Due to the internal helicity of the Protoworld and the cosmic loop (see the chapter “New Cosmology”), a jet was produced in the Einstein spacetime. The jet and the protuberances on the surface of our early Universe led to the high redshift of quasars. The jet and the protuberances produced regions in the Einstein spacetime with increased or decreased mass density in comparison with its mean mass density. The spatial dependence of the fine-structure constant arose just at the beginning of the ‘soft’ big bang. Its dipolar part arose due to the jet; the monopole part is due to the protuberances. The total spatial dependence should be positive because in the deep past the thickened Einstein spacetime had a higher mass density than today. The fine-structure constant is proportional to the mass density of the Einstein spacetime to the power of five thirds (see formulae (15)-(21)), whereas the mass of the electron-positron pairs, produced by the photons appearing in the decays of the neutral pions, is proportional to the mass density of the Einstein spacetime to the power of three (see formulae (15)-(18)). The production of the neutral pions, next of the electron-positron pairs, and next their annihilations decreased the mass density of the Einstein spacetime. This means that the changes in the mass of the pairs should not exceed Δm/m = m_neutral-pion/m_nucleon ≈ 0.144. Such maximum changes are possible due to the following changes of the density of the Einstein spacetime: Δρ_E/ρ_E ≈ ±3.0·10^-3. Such changes were possible only just at the beginning of the ‘soft’ big bang. We see that the maximum changes of the fine-structure constant should not exceed Δα_em/α_em ≈ ±6.2·10^-5. This means that all measurements for the quasars with high redshift (in the Everlasting Theory the high redshift begins from z = 0.6415), i.e. from the Keck telescope and the ESO Very Large Telescope, can be correct [1].

The Everlasting Theory also leads to the conclusion that we should not observe spatial dependences of the gravitational constant, of the speed of light in ‘vacuum’ or of spin, because these physical constants do not depend on the mass density of the Einstein spacetime. These physical constants depend on the properties of the more fundamental spacetime, i.e. the Newtonian spacetime composed of the structureless tachyons that have a positive mass.

Suppose that the binary systems of neutrinos inside the point masses of particles behave similarly to the ionized gas in stars. The theory of such stars says that the radiation pressure p is directly proportional to the fourth power of the absolute temperature T

p ∝ T^4. (88)

An analogous relation ties the total energy emitted by a black body to its temperature. This theory also suggests that the absolute temperature of a star is directly proportional to its mass. From this it follows that the total energy emitted by a star is directly proportional to the fourth power of its mass. On the other hand, the maximum energy of a virtual particle created in the surroundings of a point mass is equal to the point mass multiplied by 2. However, because from the Heisenberg uncertainty principle the lifetime of a particle is inversely proportional to its energy, we obtain that the lifetime of a point mass is inversely proportional to the fourth power of the mass

t ∝ 1/m^4. (89)

The same relation concerns circular masses. From the uncertainty principle and formula (61) we obtain

t ∝ 1/α. (90)

On the basis of formulae (89) and (90) we can calculate the lifetimes of particles. The time in which the large loop reaches the equator of the torus is

t_strong-minimal = t_em-minimal(proton) = (A/3)/c = 0.7755·10^-24 s. (91)
This is the minimum time of the strong interactions and is equal to the time needed for a photon to cover the distance between the ‘electric charge’, placed on the circular axis, and the equator of the torus. The tau in the weak interactions behaves in the same way as the electron in the electromagnetic interactions (see formula (136)). As a result we have

t_w(tau)/t_em-minimal(proton) = (m_c(proton)/m_c(electron))^4 = 2.42·10^12, (92)

so the lifetime of the tau is t_w(tau) = 1.88·10^-12 s. The weak mass of the tau is about 1782 MeV. The weak interactions are responsible for the decay of a muon and m_p(muon) = m_muon/2, so the lifetime of a muon is

t_w(muon) = t_w(tau)·(m_p(tau)/m_p(muon))^4 = 2.44·10^-6 s. (93)

The weak interactions are responsible for the decay of the hyperons, and in these interactions they behave as a nucleon whereas the muon behaves as an electron, so the lifetime of the hyperons is

t_w(hyperons) = t_w(muon)/(α_w(proton)/α_w(electron-muon)) = 1.24·10^-10 s. (94)

The weak interactions are responsible for the beta decay of a neutron; however, in such a decay a neutron behaves like an electron (an electron appears in this decay), whereas it is impossible for the proton to decay, so the lifetime of the neutron is

t_w(neutron) = t_w(hyperons)·(m_p(proton)/m_p(electron))^4 = 946 s. (95)

The lifetime of the charm hyperon Λc+(2260) is

t_w(Λc(2260)) = t_w(hyperons)·(m_p(proton)/m_p(Λc(2260)))^4 = 6.5·10^-13 s, (96)

where m_p(Λc(2260)) = m(2260) - m(1115) + Y = 1573 MeV. The lifetime of the large loop created on the circular axis of the torus of the nucleon can be calculated from the uncertainty principle E_LL·t_LL = h, where m_LL = 67.5444119 MeV. The neutral pion decays in respect of the weak interactions. The weak mass of the virtual particles produced by the large loop can be calculated from the formula m_LL(weak) = m_LL·α_w(proton) = 1.26462 MeV. This is the mass distance between the neutron and the proton. Consequently, the lifetime of the neutral pion is

t_pion(o) = t_LL·(m_LL/m_LL(weak))^4 = 0.793·10^-16 s. (97)

The charged pion decays because of the electromagnetic interactions of the weak mass, therefore

t_pion(+-) = t_pion(o)·(1/α_em)^4 = 2.78·10^-8 s. (98)
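A sketch of the lifetime chain (93), (94), (95), (97), (98). The readings m_p(tau) = 1782 MeV, m_p(electron) = half the electron mass and m_p(proton) = the point mass Y are my interpretation of the text's conventions (they reproduce the quoted numbers), and h in E·t = h is again taken as the reduced Planck constant.

```python
# Fourth-power lifetime scaling, t ~ 1/m^4 (89), applied along the decay chain.
hbar = 1.054571548e-34
t_tau = 1.88e-12                                        # s, from (92)
t_muon = t_tau * (1782 / (105.656314 / 2))**4           # (93): ~2.4e-6 s
t_hyp = t_muon / (0.0187228615 / 0.9511082e-6)          # (94): ~1.24e-10 s
t_neutron = t_hyp * (424.124493 / (0.510998906 / 2))**4 # (95): ~946 s
t_LL = hbar / (67.5444 * 1.602176e-13)                  # s, from E_LL * t_LL = hbar
t_pion0 = t_LL * (67.5444 / 1.26462)**4                 # (97): ~0.79e-16 s
t_pion_ch = t_pion0 * 137.036**4                        # (98): ~2.8e-8 s
print(t_muon, t_hyp, t_neutron, t_pion0, t_pion_ch)
```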
Four-neutrino symmetry

The entanglement of neutrinos is due to the exchanges of the binary systems of closed strings. Particles composed of the four different neutrinos have a resultant weak charge equal to zero. Furthermore, the resultant internal helicity and spin can also be equal to zero. As a result, a neutral object should be built of 4n different neutrinos, where n denotes an integer. For the interactions of the elements an object is composed of to be saturated, the number of elements in the object must be equal to the number of neutrinos in each element. Therefore, if the smaller object contains x neutrinos, the larger object must contain x^2 neutrinos (4^d, where d = 1, 2, 4, 8, 16, 32, …). The flat structures of the neutral pions should therefore contain 4, 16, 256, etc. neutrinos. In the surroundings of the torus of a real particle there appear virtual particles, and their total mass cannot be greater than the mass of the real particle multiplied by 2 and increased by the emitted binding energy. It is easily noticeable that within a nucleon at most 6 virtual pion-antipion pairs can be created at the same time. These pairs must differ by the number of neutrinos because the neutrinos are fermions. This means that, for example, the typical gravitational black hole built of neutrons (i.e. the photons on the equator of the typical black hole are moving with the speed c; see formulae (99)-(101)) can interact with 2·4^32 other typical black holes, because at most such a number of neutrinos having weak charges is contained in a virtual pion-antipion pair created inside a neutron. Therefore, in our early Universe there were around 3.69·10^19 typical black holes. The typical black hole built of neutrons (i.e. the biggest neutron star) has a mass about 25 times greater than that of the sun. The total mass of all of these biggest neutron stars was equal to about 1.821·10^51 kg. Such a mass has the baryonic matter (visible and dark) in our Universe. The typical early massive galaxy, which I call the protogalaxy, contained 4^16 typical black holes. There were 2·4^16 protogalaxies. Associations containing 4, 16, 256, etc. binary systems of massive galaxies are flattened spheroid-like structures. Notice that the rules described above lead to the four-neutrino symmetry. This symmetry is also obligatory for the following sequence of numbers: 64 = (2·4^1)^2 (for example, a meson built of four groups, each group built of four pions and each pion built of 4 neutrinos), 64^2, 64^4, etc. Such associations are chain-like structures. Our Universe appeared analogously to a large loop inside the torus of baryons, but we must replace the neutrinos in the binary systems of neutrinos with the biggest neutron stars. The objects which contained most of the binary systems of neutrinos created in the nuclear transformations (they are an analog of our early Universe) decayed to ‘galaxies’ (which carry the energy of entangled photons), similarly as our early Universe decayed to the massive galaxies. Each such object decayed to 2·4^16 photon galaxies. This leads to 300 million photons per cubic metre in our Universe (see the chapter “New Cosmology”).

Some results associated with the constant K

Calculate the mass of a typical gravitational neutron black hole. On the equator of such a black hole the neutrons are moving with a speed equal to c, but such an object is ball-shaped because inside it the field composed of the binary systems of neutrinos rotates with the same angular speed; this means that the black hole is at rest in relation to the Einstein spacetime. The nucleons in such an object are placed in the vertices of cubes and the lattice constant is a_c = (A + 4B)/2^(1/2) (see formula (183)). The radius r_bh and the mass m_bh of such a black hole satisfy

r_bh = G·m_bh/c^2. (99)

If N_1 denotes the number of neutrons in such a black hole then

4πr_bh^3/3 = N_1·a_c^3, (100)

and

m_bh = N_1·m_neutron. (101)

Solving the set of formulae (99)-(101) we get N_1 = 2.946·10^58, m_bh = 4.935·10^31 kg, i.e. about 25 masses of the sun, and r_bh = 3.664·10^4 m, i.e. about 37 km. On the other hand, from the four-neutrino symmetry it follows that the early Universe contained 2·4^32 gravitational neutron black holes. This means that the baryonic mass of the Universe is 1.821·10^51 kg. The baryonic mass in our Universe should be K^8 times greater than the rest mass of the large loop (m_LL = 67.5444 MeV), which means that the following is satisfied:

m_LL·K^8 = 2·4^32·N_1·m_neutron. (102)
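A sketch that solves the set (99)-(101) and checks (102). The left-hand side of (100) is read as 4πr_bh^3/3 (the factor π appears to have been lost in the source; this reading reproduces all three quoted values), and the MeV-to-kg conversion is my addition.

```python
# Typical gravitational neutron black hole, formulae (99)-(102).
from math import pi, sqrt
G, c = 6.6740007e-11, 2.99792458e8
m_n = 939.5378 * 1.782661e-30              # neutron mass in kg
a_c = 2.7048e-15 / sqrt(2)                 # lattice constant (A+4B)/2^(1/2), m
# Substituting (99), (101) into (100): N1^2 = 3 * a_c^3 * c^6 / (4*pi*G^3*m_n^3).
N1 = sqrt(3 * a_c**3 * c**6 / (4 * pi * G**3 * m_n**3))
m_bh = N1 * m_n
r_bh = G * m_bh / c**2
print(N1, m_bh, r_bh)    # ~2.95e58, ~4.94e31 kg (~25 solar masses), ~3.7e4 m (~37 km)
K, m_LL = 0.7896685548e10, 67.5444107 * 1.782661e-30
print(m_LL * K**8, 2 * 4**32 * N1 * m_n)   # both ~1.82e51 kg, checking (102)
```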
The question as to why the value of the dark energy mass density calculated within quantum theory is approximately 10^120 times greater than the measured one can be answered as follows. We know that the spin of stable particles is defined by the expression mvr. Knowing the natural speed of the closed strings, we can then calculate the internal energy of the neutrino. The m_neutrino·v^2 plus the rest energy m_neutrino·c^2 is equal to the calculated rest energy of the Protoworld (which is equal to m_s·c^2, where m_s = 1.961·10^52 kg). This means that there is a possibility of the Protoworld→neutrino transition, which is the reason why our Universe exited from the black hole state. It also means that the measured energy of a non-rotating-spin neutrino should be K^12 = 0.59·10^119 times smaller than the energy (not mass) frozen inside the neutrino. The Protoworld→neutrino transition leads to the creation of a sphere filled with the surplus binary systems of neutrinos, which is the dark energy. The gravitational field propagates with a speed equal to 2K^9·c/3, i.e. 8·10^88 times higher than c.
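A quick numerical check of the three claims above, using only values from Table 2; v is the linear speed of a closed string, which I take here as the "natural speed of the closed strings" mentioned in the text.

```python
# Internal energy of the neutrino vs. the Protoworld rest energy, and the K powers.
c = 2.99792458e8
m_neutrino, v = 3.3349306e-67, 0.7269253e68   # kg, m/s (Table 2)
m_s = 1.961e52                                 # kg, mass of the Protoworld
print(m_neutrino * v**2)    # ~1.76e69 J
print(m_s * c**2)           # ~1.76e69 J, matching the claim above
K = 0.7896685548e10
print(K**12)                # ~0.59e119, hidden-to-measured energy ratio
print(2 * K**9 / 3)         # ~8e88, factor by which the gravitational field outruns c
```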
The properties of the Newtonian and Einstein spacetimes lead to the relativistic mass

The inertial and gravitational mass of a particle I define as directly proportional to the number of all closed strings (or to the total volume of all closed strings) which the particle consists of. This also concerns the relativistic mass. The mean speed of bound and free tachyons cannot change; therefore the spin speed of an accelerated particle decreases. This causes the pressure inside the particle to decrease, and the particle absorbs the free binary systems of neutrinos, composed of the closed strings, from the Einstein spacetime. The Einstein formula E = mc^2 is obligatory for such a mechanism for particles composed of the binary systems of neutrinos. Generally, mass and energy do not have the same origin.

The unsolved basic problem associated with spin is as follows: what spontaneous phenomena lead to the law of conservation of spin? Fluctuations of spacetime and fields cause compressions and rarefactions to arise. To extend the lifetime of a compression, the pressure inside it must decrease. Because the mean speed of particles inside the compression cannot change, only the creation of a vortex will cause a reduction in the pressure. When we accelerate a vortex, its spin speed decreases, which in turn causes the pressure to decrease. This means that, to increase the pressure, the density of the Einstein spacetime inside the vortex must increase. When we accelerate the proton, for example, its spin speed must decrease because the resultant speeds of the components the proton is composed of cannot change. This causes the pressure inside the proton to decrease, and the additional energy accumulated in the Einstein spacetime flows into the proton and transforms it into a vortex in such a way that the spin is always half-integral. When we accelerate some particles, the spin of the torus must be parallel or antiparallel to the linear velocity; then the spin of the particle does not change. This means that the spin angular velocity is always parallel or antiparallel to the relativistic velocity. When we accelerate, for example, protons (the spin speed of the binary systems of neutrinos on the equator of the resting torus is equal to c), the spin speed of the torus decreases. This is because in spacetimes and inside particles the law of conservation of energy is obligatory; in this case, the total energy of the binary systems of neutrinos the torus is composed of is conserved. Rotations of the spin vectors of the binary systems of neutrinos of which the torus is built are impossible because the electric charge must also be conserved (all spins of the binary systems of neutrinos the torus is built of must be perpendicular to its surface). Because the mean spin velocity v(spin) of the proton is perpendicular to its relativistic velocity v(relativistic), for the binary systems of neutrinos placed on the equator we have

n·v^2(spin) + n·v^2(relativistic) = n·c^2, (103)

where n denotes the number of binary systems of neutrinos within a relativistic proton. Because the law of conservation of spin is obligatory, for the binary systems of neutrinos placed on the equator (and similarly for all other binary systems of neutrinos) we have

N_n·c = n·v(spin), (104)

where N_n denotes the number of binary systems of neutrinos in a resting proton. The size of the torus also cannot change because the spin and charge do not change. Transformations of a very simple nature lead to the following formula:

n = N_n/(1 - v^2/c^2)^(1/2). (105)

Since the relativistic mass is directly proportional to n whereas the rest mass is proportional to N_n, we obtain the very well known Einstein formula. We can see that this formula is correct only when:
- the half-integral spin is associated with a torus having a surface similar to the Ketterle surface for a strongly interacting gas,
- the laws of conservation of spin and energy are obligatory.
This means that a relativistic proton is built up of more binary systems of neutrinos, i.e. the thickness of the surface of the torus is greater; successive layers built up of the same number of binary systems of neutrinos are created, because the number of lines of electric forces created by the torus cannot change over time. As the point mass must be about 4/3 times the mass of the torus, this mass also increases when we accelerate a proton. The neutrinos do not have a relativistic mass because the density of the field composed of the free binary systems of closed strings is practically equal to zero; we do not observe interactions associated with such a field.
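A minimal sketch of the derivation (103)-(105): eliminating v(spin) between the two conservation conditions reproduces the Lorentz factor.

```python
# n/N_n from (103) and (104): v_spin^2 + v^2 = c^2 and N_n*c = n*v_spin.
from math import sqrt

def n_over_Nn(v_over_c):
    v_spin = sqrt(1 - v_over_c**2)   # spin speed in units of c, from (103)
    return 1 / v_spin                 # from (104), i.e. formula (105)

for v in [0.0, 0.5, 0.9, 0.99]:
    print(v, n_over_Nn(v))            # reproduces 1/(1 - v^2/c^2)^(1/2)
```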
Characteristic total cross sections for N-N and π-N scattering

Knowing the internal structure of particles, we can calculate the coupling constants for the interactions and define what is needed in the calculations of scattering potentials. Sometimes the calculations are very simple, for example for the proton-proton total cross sections. The resting torus is composed of one layer of the binary systems of neutrinos and they are at a distance of about 2πr_neutrino from one another. This means that moving protons can penetrate the tori of the protons the target consists of. The range of the strong interactions for a resting proton is a little greater than the radius of the last tunnel (A + 4B = 2.7048 fm) and is equal to the circumference of the large loop (4πA/3 = 2.9215 fm). In fact, the range is slightly greater because the opened loop is tangential to the circular axis; the correct value is d_range-strong = 2.958 fm. To neglect the cross section resulting from the electromagnetic interactions of nucleons, they should be at a distance smaller than A + 4B. The nucleons in a beam and target have a tendency to collect in the vertices of squares having a diagonal equal to A + 4B. The exchanged pions are most often found in the centres of the squares. The volumetric binding energy for such nucleons is 14.952 MeV (see the explanation above formula (183)). This means that we can neglect the electromagnetic interactions of nucleons in comparison to the strong interactions when the nucleons in a beam have an energy of about 15 MeV. For a kinetic energy of the proton of about 15 MeV, due to the possible turns of the spins (thermal motions), the strong field fills a sphere having a radius equal to the range of the strong interactions. When the distance between the falling protons and the resting tori is less than the sum of the range of the strong interactions and the radius 2A/3 of the large loop, the protons are scattered on the circular axes of the resting tori of the protons the target consists of (i.e. on the large loops having the highest energy density in the resting nucleons). In this case the p-p cross section is

σ_15MeV(p-p) = π(d_range-strong + 2A/3)^2 = 368 mb. (106)

For medium kinetic energies (a few hundred MeV, for example) the total cross section rapidly decreases for the following reasons:
1. The spin of the falling proton must be parallel or antiparallel to the relativistic velocity.
2. The spin speed of the proton decreases when the relativistic speed increases; this causes the spin period of the large loops to increase whereas their mass decreases. This causes the strong interactions outside the torus of the proton to vanish.
Therefore, the colliding protons are scattered on their circular axes, which means that in this case the total cross section is

σ_medium(p-p) = π(2A/3 + 2A/3)^2 = 27 mb. (107)

For kinetic energies a few times higher than the rest mass of the proton, a few new layers arise on the surface of the torus of the proton and the torus becomes non-transparent. During collisions of such protons with a resting target the cross section is (the tori of the protons the target is composed of are transparent)

σ_high(p-p) = π(A + 2A/3)^2 = 42 mb. (108)

For kinetic energies a few times higher than the rest mass of the proton, for antiparallel beams of nucleons,

σ_high-antiparallel(p-p) = π(2A)^2 = 61 mb. (109)

The n-p scattering differs from the p-p scattering. The negative pion in the neutron (due to electric attraction) looks for the electric charge of the proton. This means that the proton can see the mass of the negative pion. Because the centres of the large loops the negative pion consists of are in the d = 1 tunnel (r = A + B), and because the radius of the large loop is 2A/3, for an energy of about 15 MeV and for the n-p scattering we obtain

σ_15MeV(p-n) = π(d_range-strong + A + B + 2A/3)^2 = 671 mb. (110)

Furthermore, in highly energetic p-n scattering the mass of the negative pion is very small, so both total cross sections, i.e. for p-p and p-n, should have the same value. It is easy to calculate that for very energetic π-p scattering the total cross section is approximately 27 mb (see formula (107)). There should be a significant reduction of the cross section for negative pion-proton scattering for energies of the pion equal to the energies of the S bosons in the d = 0 and d = 1 tunnels, i.e. in the tunnels lying under the Schwarzschild surface for the strong interactions. These energies are equal to approximately 423 MeV and 727 MeV. These reductions are associated with the production of the resonances.
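A sketch of (106)-(110). A is the external radius of the torus from Table 2; B is recovered from A + 4B = 2.7048 fm, which is how I read the text's definitions.

```python
# Total cross sections (106)-(110) in millibarns (1 mb = 1e-31 m^2).
from math import pi
fm, mb = 1e-15, 1e-31
A = 0.697442473                 # fm
B = (2.7048 - A) / 4            # fm, from A + 4B = 2.7048 fm
d_rs = 2.958                    # fm, range of the strong interactions
sigma = lambda r: pi * (r * fm)**2 / mb
print(sigma(d_rs + 2 * A / 3))          # (106): ~368 mb
print(sigma(2 * A / 3 + 2 * A / 3))     # (107): ~27 mb
print(sigma(A + 2 * A / 3))             # (108): ~42 mb
print(sigma(2 * A))                     # (109): ~61 mb
print(sigma(d_rs + A + B + 2 * A / 3))  # (110): ~671 mb
```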
Lengths of N-N scattering

The effective ranges are as follows:

r_sing(p-n) = r_1(n-n) = r(p-p) = A + 4B = 2.7 fm. (111)

For the n-n scattering there is also a second effective range. In nucleons, the relativistic pions are in the d = 1 state. Since pions consist of the large loops having a radius equal to 2A/3, the effective range for this state is A + B + 2A/3:

r_2(n-n) = A + B + 2A/3 = 1.7 fm. (112)

For the triplet p-n scattering the effective range is

r_trip(p-n) = A + B + 2A/3 = 1.7 fm. (113)

See also the theory of the deuteron in the chapter titled “Four-shell Model of Atomic Nucleus”. When we scatter neutral particles, or charged particles on neutral particles, the most distant closed photons appear on the circle having a radius equal to the effective range. Assume that the circular axes of the nucleons are in the same plane. To calculate the lengths of the p-n and n-n scattering (the lengths are the ranges of the bosons), we should add twice the range of the strong interactions to the length of a closed photon:

a_sing(p-n) = 2π(A + 4B) + 2(A + 4B) = 22.4 fm, (114)

a(n-n) = 2π(A + B + 2A/3) + 2(A + 4B) = 15.9 fm. (115)

In the p-n triplet state the directions of the spins overlap and have the same senses:

a_trip(p-n) = 2(A + 4B) = 5.4 fm. (116)

See also the theory of the deuteron; the exact result is 5.4343 fm. In the p-p scattering the length of the closed photons is equal to the range of the strong interactions:

a(p-p) = (A + 4B) + 2(A + 4B) = 8.1 fm. (117)
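The same A and B as in the cross-section sketch reproduce the scattering lengths:

```python
# Scattering lengths (114)-(117) in fm.
from math import pi
A = 0.697442473
B = (2.7048 - A) / 4
print(2 * pi * (A + 4 * B) + 2 * (A + 4 * B))          # (114): ~22.4 fm
print(2 * pi * (A + B + 2 * A / 3) + 2 * (A + 4 * B))  # (115): ~15.9 fm
print(2 * (A + 4 * B))                                  # (116): ~5.4 fm
print((A + 4 * B) + 2 * (A + 4 * B))                    # (117): ~8.1 fm
```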
New interpretation of the uncertainty principle

I will call the real and virtual photons, electrons and electron-positron pairs in the Einstein spacetime renewable particles, i.e. particles disappearing in one place of a field and appearing in another, and so on. This leads to the wave function. We can say the same about the real and virtual loops composed of the binary systems of neutrinos in a strongly interacting field, i.e. a field composed of the virtual and real large loops created on the circular axis of the torus of the core of baryons, and of the bound states of large loops such as mesons. In sum, the photons and electrons in the Einstein spacetime and the free and bound large loops in a strongly interacting field behave as quantum particles. We can observe some distribution of the energy, mass or mass/energy of the renewable/quantum particles in the adequate field. Such a distribution can be described by means of the wave function. The value of n (see the figure titled “The uncertainty principle for state lifetime”) depends on the spins of the objects a field is composed of. For the elementary photons (an elementary photon is the massless rotational energy of a single neutrino-antineutrino pair), the electron-positron pairs and the large loops created in the Einstein spacetime, n = 1. For electrons n = 1/2, but new free electrons cannot be created in the Einstein spacetime (only electron-positron pairs can be) because for the Einstein spacetime n = 1. The total spin of a binary system is equal to 1. At first there appears a binary system of loops with different internal helicities, composed of the rotating binary systems of neutrinos. Due to the different internal helicities of the loops in a binary system of loops, sometimes the vis-a-vis binary systems of neutrinos (the dipoles of the weak charges) placed in the different components of the binary system of loops have opposite orientations, so there is repulsion between the weak charges. Such a state of the binary system of loops is unstable and the system transforms into a torus-antitorus pair, for example into an electron-positron pair. Such torus-antitorus pairs are stable for the periods of spinning, and the radii of the equators are equal to the radii of the loops. The electrons observed today (this does not concern the electrons in the electron-positron pairs) were created during the era when the symmetry of the Einstein spacetime was broken; this is described in more detail below. In a strong field there can appear particles composed of the entangled large and/or other loops.

There are different lifetimes associated with a quantum particle. For example, a photon can be the rotational energy of only one binary system of neutrinos (i = 1) or of i > 1 binary systems of neutrinos. For a photon, i has a strictly determined value: it is the number of the entangled elementary photons the photon consists of. A photon behaves as follows: the i entangled elementary photons disappear in some places of the Einstein spacetime and appear in other i places, and so on. This leads to the conclusion that photons sometimes behave like particles (i = 1) or as a set of entangled elementary photons (i >> 1). We can see that the uncertainty principle and quantum physics are associated with the appearing/disappearing mechanism, i.e. with the changing distribution of the i entangled elementary photons (in reality, the carriers of the elementary photons are entangled). The state-lifetime is the time distance between an appearance and the nearest disappearance. We see that the state-lifetime of a renewable particle is associated with the length of the circumference of the loop. The length of a wave is associated with the radius of a loop, so to obtain the state-lifetime of a photon we must multiply the time resulting from the length of the wave by 2π. The inverse of the resultant frequency in the uncertainty principle is not the lifetime of a renewable particle: it is the state-lifetime in one state with a determined value of i. The state-lifetime is the time of some distribution of the entangled photons a photon is composed of (i = const), whereas the lifetime of a photon is the time after which the photon decays into non-entangled photons composed of entangled elementary photons, i.e. i changes its value. A photon, after its lifetime, decays into more entangled photons, each containing fewer entangled elementary photons. The four-neutrino symmetry determines the number of new photons. The uncertainty of energy does not define the sum of the energies of the entangled photons; it is the uncertainty of the distribution of energy between the i entangled elementary photons a photon consists of. After the state-lifetime, which follows from the sum of the frequencies of the entangled photons, the distribution of the energy of the entangled elementary photons changes. Emissions of photons composed of a greater number of entangled photons are more probable because then each entangled photon carries less energy. This means that the lifetime of such a photon should also be longer. In describing the four-neutrino symmetry, I argued that during nuclear transformations superphotons are emitted, each composed of i = 2·4^32 entangled elementary photons. Such photons, after their lifetime, decay into photons each composed of a smaller number of entangled photons, for example into the photon galaxies (i = 4^16), similarly as the early Universe decayed into the massive protogalaxies. Today the massive galaxies dominate, so we can assume that the photon galaxies subsequently dominate, i.e. that the lifetime of the photon galaxies is equal to the lifetime of the massive galaxies.
When the massive galaxies start to decay into smaller objects, the photon galaxies should do the same. As a result, the lifetime of the photon galaxies should be 2·4^16 times longer than the lifetime of the original photon.

Broken symmetry

We can derive the whole of nature from the physical properties of the Newtonian spacetime and the mass density of the Einstein spacetime that is composed of the binary systems of neutrinos. In the Einstein spacetime there appear spontaneous fluctuations. Because the fundamental field, i.e. the Newtonian spacetime, is composed of tachyons that have linear and rotational energy, the thickened regions of the Einstein spacetime transform into rotary vortices. As a result, the helicity and spin of all created rotary vortices must be equal to zero. This means that the rotary vortices arise as vortex-antivortex pairs. We see that such phenomena break the symmetry of the Einstein spacetime inside the vortices. In both components of the vortex-antivortex pair the creation of electron-positron pairs is possible. When the mass density inside a vortex was sufficiently high, in the left-handed vortex there appeared the positron→proton transitions whereas in the right-handed vortex the electron→antiproton transitions. When the mass of a vortex is strictly determined, there is the possibility of the vortex→Protoworld transition. Our rotary vortex was left-handed, so there the protons and next the neutrons were created, because nucleons are left-handed particles. Next, on the circular axis there appeared the protogalaxies composed of the biggest neutron stars. Due to the four-neutrino symmetry and the entangled neutrinos, the protogalaxies grouped into larger structures already before the ‘soft’ big bang. Furthermore, because the internal energy of a neutrino is equal to the mass of the Protoworld, ‘one day’ there was the Protoworld→neutrino transition. The dark energy released in this transition caused the expansion of the early Universe (i.e. of the cosmic loop). The electron-antineutrinos appearing in the beta decays broke the symmetry of the Einstein spacetime for the third time in the history of the evolution of a left-handed rotary vortex. This means that the present symmetry of the Universe is broken due to the same orientation of the angular velocities of the massive spiral galaxies in relation to their magnetic axes for the majority of such galaxies, due to the electron-proton asymmetry, and because the Einstein spacetime contains more electron-antineutrinos than other neutrinos.

Summary

The stability of the closed strings leads to the point mass of baryons. The point and circular masses behave like the ionized gas in stars. Such a model leads to lifetimes of particles consistent with the experimental data. The constants of interactions are directly proportional to the mass densities of the fields carrying the interactions. The factor of proportionality has the same value for all interactions. The changing running coupling for the strong-weak interactions follows from the uncertainty principle applied to the virtual large loops responsible for the strong interactions. The properties of the Newtonian and Einstein spacetimes lead to the relativistic mass. The four-neutrino symmetry solves many problems associated with particle physics and cosmology. The characteristic values calculated for the pion-N and N-N scattering on the basis of the atom-like structure of baryons are consistent with the experimental data.
The new interpretation of the uncertainty principle leads to the evolution of the entangled photons. Throughout the history of the Universe, symmetry was broken three times. The calculated binding energy of the core of baryons is 14.98 MeV. But there is also the binding energy of the core following from the entanglement of the binary systems of neutrinos the torus inside the core consists of. The exchanged binary systems of closed strings are moving with a superluminal speed, so the involved energy is very high. It is very difficult to destroy the cores of baryons. The binding energy of a neutrino is tremendous (it is equivalent to about 4·10^50 kg), so it is very difficult to destroy the neutrinos too. In our Universe there are no black holes with mass densities higher than that of the cores of baryons.

Table 5. Theoretical results

Physical quantity | Theoretical value
Centripetal force acting on the closed string | 2.2 E+133 N*
Lifetime of the neutron | 946 s
Lifetime of the muon | 2.44 E-6 s
Lifetime of the tau | 1.88 E-12 s
Lifetime of the hyperon | 1.24 E-10 s
Lifetime of the charm baryon Λc+(2260) | 6.5 E-13 s
Lifetime of the neutral pion | 0.79 E-16 s
Lifetime of the charged pion | 2.8 E-8 s
Coupling constant for strong interactions of the non-relativistic protons | 14.4038
Coupling constant for strong interactions of the pions | 1
Maximum change of the fine-structure constant | ±6.2 E-5
*2.2 E+133 = 2.2·10^133

Table 6. Theoretical results

Physical quantity | Theoretical value
Mass of a typical neutron black hole | ~25 masses of the sun
Radius of a typical neutron black hole | ~37 km
Total mass of the dark energy | 1.961 E+52 kg
Mass of the baryonic matter | 1.821 E+51 kg
Ratio of the hidden dark energy to the mass of the neutrino | 0.59 E+119
Ratio of the total mass of the dark energy to the mass of the baryonic matter (inside a sphere filled with baryons the ratio has a different value) | 10.769805
p-p total cross section for kinetic energy about 15 MeV | 368 mb
p-p total cross section for kinetic energies of approximately a few hundred MeV | 27 mb
p-p total cross section for kinetic energies of approximately a few rest masses of the proton | 42 mb
p-p total cross section for kinetic energies of approximately a few rest masses of the proton, for antiparallel beams | 61 mb
p-n total cross section for kinetic energy approximately 15 MeV | 671 mb
p-n total cross section for kinetic energies of approximately a few rest masses of the proton | 42 mb
π-p total cross section for very high energies | 27 mb
Significant reduction of the cross sections for negative pion-proton scattering | 423 MeV and 727 MeV

Table 7. Values of the G(i)

Interaction | Relative value of the G(i)
Strong | 1 (for G_S = 5.46147·10^29 m^3 s^-2 kg^-1)
Weak | 1.9·10^-3
Electromagnetic interaction of electrons | 5.1·10^2 (it is not a mistake)
Gravitational | 1.2·10^-40
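The relative values in Table 7 follow directly from the constants quoted earlier (G_S, G_w, G_em and G); a minimal check:

```python
# Table 7 as ratios of the interaction constants to the strong constant GS.
GS = 5.46147e29
print(1.0354864e27 / GS)    # weak: ~1.9e-3
print(2.7802527e32 / GS)    # electromagnetic (electrons): ~5.1e2
print(6.6740007e-11 / GS)   # gravitational: ~1.2e-40
```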
References
[1] J. K. Webb, J. A. King, M. T. Murphy, V. V. Flambaum, R. F. Carswell, M. B. Bainbridge; Evidence for spatial variation of the fine structure constant; arXiv: 1008.3907v1 [astro-ph.CO] 23 Aug 2010.

Structure of Particles (continuation)

Introduction

Previously, I described the internal structure of the Newtonian spacetime, closed strings, neutrinos, electrons, muons, pions and nucleons. The description of these structures is associated with the phase transitions of the Newtonian spacetime, the Einstein spacetime, and the symmetrical decays of particles in a strong field. Photons and gluons are the massless rotational energies of the neutrino-antineutrino pairs the Einstein spacetime consists of (E = hν, where h is the spin of a pair whereas ν is the frequency of the spinning). Certain parts of an entangled photon can be outside the occupied states of an atom. Multiplying the Compton length of an electron by 2π, we can calculate the state-lifetime. Slowly moving electrons have state-lifetimes of about 10^-20 s. This means that within one second an electron appears in 10^20 places of the Einstein spacetime. This leads to the wave function. An electron going through a set of slits (an electron only appears there, whereas the wave function is ongoing) appears many times in each slit. We cannot say for certain that an electron is going through only one slit.
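A small numerical check of the state-lifetime claim; reading "Compton length" as the reduced Compton length of the electron (my assumption) reproduces the quoted order of magnitude.

```python
# State-lifetime of a slow electron: 2*pi times its (reduced) Compton length over c.
from math import pi
c, hbar, m_e = 2.99792458e8, 1.054571548e-34, 9.10938e-31
lambda_C = hbar / (m_e * c)     # ~3.86e-13 m (assumed reading of "Compton length")
print(2 * pi * lambda_C / c)    # ~8e-21 s, i.e. ~10^20 appearances per second
```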
We can calculate the spin of the stable objects (i.e. the closed strings, neutrinos, cores of baryons and protoworlds) from the mvr. The renewable pairs arise in different ways. The resultant internal helicity of the Einstein spacetime is not broken when there appears a binary system of entangled loops and each loop in the binary system has a different internal helicity. The resultant energy of the entangled loops multiplied by the period of spinning must be equal to h (i.e. the spin is 1) because the Einstein spacetime components have such a spin. Then the symmetry of the Einstein spacetime is not broken; this is the reason why the carriers of the interactions associated with this field have unitary spin. Entangled loops exchange the binary systems of the closed strings. Since the total spin of the Einstein spacetime cannot change, there arise pairs of the binary systems of loops, i.e. quadrupoles. The opposite internal helicities of the loops in a binary system of loops enforce an immediate transition (possible because the Newtonian spacetime is composed of tachyons moving at a speed approximately 8·10^88 times greater than that of light in spacetime) of the two entangled loops into the electron-positron pair. The Compton length of the electron is the radius of the loop. The tori, i.e. the electric charges, consist of the entangled and polarized binary systems of neutrinos the Einstein spacetime consists of. The surfaces of the tori have an appearance similar to the Ketterle surface for a strongly interacting gas discovered in 2005. The loops overlap with the equators of the tori. The binary systems of neutrinos that the loops are composed of make half-turns on the circular axis of the torus and in its centre, because in those places the lines of electric forces, created by the polarized binary systems of neutrinos the torus is composed of, change their senses. The half-turns decrease the local pressure in the Einstein spacetime, which causes new binary systems of neutrinos to flow into a bare electron (the absorption). This means that half of the mass of the bare electron should be associated with the torus, i.e. with the electric charge, whereas the second half of the bare mass is associated with the centre of the torus, i.e. with the point mass of the electron. The torus-antitorus pairs are stable structures for the period of spinning. On the surfaces of the tori, all spins of the neutrino-antineutrino pairs have their senses pointed either towards the interior of the torus or outwards from the surface. This leads to the conclusion that there arises one divergent and one convergent spin field. When the torus/electric charge of an electron disappears in some place, the mass of the electron in this place vanishes due to the emission of the surplus neutrino-antineutrino pairs. The radius of the bare electron is 554.32 times greater than that of the core of baryons. Outside the bare electron, virtual bare electron-positron pairs arise.

Muons consist of a contracted electron and two different energetic neutrinos that interact with the point mass of the electron. The point mass of the electron cannot be a stable structure when it contains only one energetic neutrino, because the resultant centrifugal force would not be equal to zero. Because the simplest neutral pion consists of the two binary systems of neutrinos and because the charged pion decays into a muon and a neutrino, the mass of the muon should be equal to the bare mass of the charged pion minus a quarter of the mass of the neutral pion. A tau lepton consists of an electron and a massive particle, created inside a baryon, which interacts with the point mass of the electron. Mesons, meanwhile, are binary systems of gluon loops that are created inside and outside the torus of baryons. They can also be mesonic nuclei composed of other mesons and the large loops, or binary systems of mesonic nuclei and/or other binary systems. A charged pion consists of an electron and three different energetic neutrinos that interact with the point mass of the electron. This particle can transform into the neutral pion (i.e. into the binary system of the large loops) interacting with the electron-neutrino pair. The charged pion is a four-particle system. Fermions containing more than three different energetic neutrinos do not exist because two or more of the components cannot simultaneously have the same internal helicity and the same sign of the weak or electric charge. Below, the masses of selected mesons are calculated: of the lightest mesonic nuclei, the kaons, the W and Z bosons, and the D and B mesons.

A particle placed in different fields does not look the same. In an electromagnetic field, many charged pions occupy the same state when they are composed of a different number of binary systems of neutrinos, so they are in different states for the electromagnetic field. In a strong field, the neutral and charged pions look the same because both contain the same two strongly interacting large loops. The spins of the two large loops are antiparallel. This means that a pion in a strongly interacting field looks analogous to the electron-electron pair in the ground state of an atom. This means that in the ground state of baryons (d = 1) there can be only one pion. In the d = 2 state there are more pions, but due to their interactions with the strong field components they do not look the same. The Titius-Bode orbits for the strong interactions lead only to the S states. Here I will calculate the masses of the hyperons and of selected resonances, and also the mass of the tau lepton. Within the new non-relativistic electroweak theory, I calculated the magnetic moment of the muon, the frequency of the radiation emitted by the hydrogen atom under a change of the mutual orientation of the electron and proton spins in the ground state, and the Lamb-Retherford shift.

Mesons

Masses of the lightest mesonic nuclei

We can build three of the smallest unstable neutral objects containing the carriers of the strong interactions, i.e. the pions (134.9661 MeV, 139.57040 MeV) and the bound large loops (134.9661/2 MeV).
Each of those objects must contain a large loop because only then can it interact strongly. The letter a denotes the mass of the object built of a neutral pion and one large loop

a = m(neutral pion, loop) = 202.45 MeV.

The parity of this object is P = +1 because both the pion and the large loop have a negative parity, so the product has a positive value. The letter b denotes the mass of two neutral pions and one large loop

b = m(2 neutral pions, loop) = 337.42 MeV,

whereas b’ denotes the mass of two charged pions and one large loop

b’ = m(2 charged pions, loop) = 346.62 MeV.

The parity of these objects is P = -1. In particles built of the objects a, b and b’, the spins are oriented in accordance with the Hund law (the sign ‘+’ denotes spin oriented up, the sign ‘-’ denotes spin oriented down, and the word ‘and’ separates succeeding shells), for example: +-, and +- +++---, and +- +++--- +++++-----, etc. Because the electrically neutral mesonic nuclei may consist of three different types of objects, of which only one contains the charged pions, there should be two times fewer charged pions than neutral pions. It is also obvious that there should be some analogy between mesonic and atomic nuclei. I will demonstrate this for the Ypsilon meson and the Gallion. The Gal is composed of 31 protons and has an atomic mass equal to 69.72. To try to build a meson having a mesonic mass equal to 69.5 we can use the following equation:

Ypsilon = 8a + 14b + 9b’ = 9463 MeV (vector).

Such a mesonic nucleus contains 18 charged pions and 36 neutral pions, and consists of 31 objects. The masses of the lightest mesonic nuclei are as follows. The Eta meson is an analog of Helion-4. Since the Eta meson contains three pions, there are two possibilities: such a mesonic nucleus could contain one charged pion, but such an object is not electrically neutral. This means that the Eta meson should contain two charged pions or none:

Eta = a + b’ = 549.073 MeV (pseudoscalar),
Eta(minimal) = a + b = 539.864 MeV (pseudoscalar).

The Eta’ meson is an analog of Lithion-7:

Eta’ = 3a + b’ = 953.971 MeV (pseudoscalar).

We see that the mesonic nuclei (a + b’) and (3a + b’) exist, which suggests that there should also be (2a + b’). However, an atomic nucleus with an atomic mass equal to 5.5 does not exist. Such a mesonic nucleus can, however, exist in a bound state, for example inside a binary system of mesons:

X’ = 2a + b’ = 751.522 MeV (vector).
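The whole construction reduces to sums of the three building blocks; a minimal sketch:

```python
# Masses of the lightest mesonic nuclei from the blocks a, b, b' (MeV).
pi0, pic, loop = 134.9661, 139.57040, 134.9661 / 2
a = pi0 + loop                  # ~202.45 MeV
b = 2 * pi0 + loop              # ~337.42 MeV
bp = 2 * pic + loop             # ~346.62 MeV
print(a + bp)                   # Eta: ~549.07 MeV
print(a + b)                    # Eta(minimal): ~539.86 MeV
print(3 * a + bp)               # Eta': ~953.97 MeV
print(2 * a + bp)               # X': ~751.52 MeV
print(8 * a + 14 * b + 9 * bp)  # Ypsilon: ~9463 MeV (31 objects)
```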
The mass of kaons
To calculate the mass of the particle created in the d=0 state in a nucleon, for which the ratio of its mass to the distance between the masses of the charged and neutral pions is equal to α_sw(d=0)/α_w(proton), we can use the following formula:
(m(π±) − m(π0))·α_sw(d=0)/α_w(proton) = 244.398 MeV. (118)
This mass interacts with the point mass of the particle, which is equal to m(π±) − m(π0). The total mass therefore equals 249.003 MeV. Two such particles create a binary system having a mass equal to 497.760 MeV (the components are at a distance equal to the Compton wavelength of the muon, so we must subtract the binding energy), which is the mass of the neutral kaon K0. This kaon can emit one particle having a mass equal to m(π±) − m(π0); the particle created as a result is in a charged state. If we add the radiation mass of the entire particle (the components are not at a distance equal to the Compton wavelength of the muon because there is only one charged muon), we obtain the mass of the K+ kaon, equal to 493.728 MeV.
Due to the strong interactions the neutral kaon decays into two pions (the coupling constant is equal to 1), whereas due to the weak interactions it decays into three pions. The point mass of the proton (Y = 424.1245 MeV) is about 3.14 times greater than the rest mass of the neutral pion, so the coupling constant of the weak interactions of two pions is about 3.14² ≈ 9.9 times smaller than for the proton, i.e. about 1.9·10⁻³. This means that the K0_L kaons should live approximately 1/(1.9·10⁻³) ≈ 527 times longer than the K0_S. Earlier I calculated the lifetimes of the pions.
The mass of the W± and Z0 bosons
The W± and Z0 bosons exist, but they are not responsible for the weak interactions in the low-energy regime. We can calculate the masses of particles for which the ratio of their mass to the distance between the masses of different states of known particles is equal to Xw = α_w(proton)/α_w(electron-muon) (see formula (57)). For the kaons we obtain
(m(K0) − m(K±))·Xw = 79.4 GeV. (119)
This is the mass of the W± boson. For the pions we have
(m(π±) − m(π0),free)·Xw = 90.4 GeV. (120)
This is the mass of the Z0 boson. For the four d states of the relativistic W pions (see Table 1) we obtain
Xw·(m(W±) − m(W0)), d=0: 0.815 TeV, (121)
Xw·(m(W±) − m(W0)), d=1: 140 GeV, (122)
Xw·(m(W±) − m(W0)), d=2: 118 GeV, (123)
Xw·(m(W±) − m(W0)), d=4: 105 GeV. (124)
The signals of the existence of the masses defined by formulae (121)-(124) should be very weak because in the high-energy regime the abundance of baryons with destroyed Titius-Bode orbits for the strong interactions is very high.
D and B mesons
The neutral kaon is a binary system of two objects. If we divide the mass of the neutral kaon by the mass of the neutral pion, we obtain the factor Fx = 3.68804 for binary systems built of two mesonic nuclei, of one mesonic nucleus and a binary system, or of two binary systems. The mean mass of the binary system built up of two kaons gives
D(charm, 1865) = [(m(π0, 134.966) + m(π±, 139.570))/2]·Fx² = 1867 MeV, (125a)
D(strange) = m(Eta(minimal), 539.864)·Fx = 1991 MeV, (125b)
K*(892) = m(244.398)·Fx = 901 MeV, (125c)
B = [m(Eta(minimal), 539.864) + m(K*, 892)]·Fx = 5281 MeV, (125d)
B(strange) = [m(Eta', 953.971) + m(K0, 497.760)]·Fx = 5354 MeV, (125e)
B(charm) = [m(X', 751.522) + m(Eta', 953.971)]·Fx = 6290 MeV. (125f)
Why do binary systems live longer than the lightest mesonic nuclei? Because the nature of the interactions changes: in binary systems the weak interaction dominates, so they behave in a similar way to a muon. Their lifetime is inversely proportional to the mass to the power of four. The mass of the B(charm) meson is Ny = 6290/105.667 = 59.53 times greater than the mass of the muon. Therefore, the lifetime of the B(charm) meson can be calculated using the following formula (the theoretical lifetime of the muon is tw(muon) = 2.4·10⁻⁶ s):
tw(B(charm)) = tw(muon)/Ny⁴ = 1.9·10⁻¹³ s.
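A short numerical check of the Fx-scaled masses and of the lifetime estimate above (Python; only values quoted in the text are used, and the kaon binding energy is simply the difference between twice 249.003 MeV and the quoted K0 mass):

```python
# Binary-system meson masses scaled by Fx = m(K0)/m(pi0), and the
# B(charm) lifetime scaled from the quoted theoretical muon lifetime.
M_K0, M_PI0, M_PI_CH = 497.760, 134.9661, 139.57040   # MeV
Fx = M_K0 / M_PI0                                     # ~3.68804
print(f"K0 binding energy = {2 * 249.003 - M_K0:.3f} MeV")

masses = {
    "D(charm)":   ((M_PI0 + M_PI_CH) / 2) * Fx**2,    # ~1867 MeV
    "D(strange)": 539.864 * Fx,                       # ~1991 MeV
    "K*(892)":    244.398 * Fx,                       # ~901 MeV
    "B":          (539.864 + 892) * Fx,               # ~5281 MeV
    "B(strange)": (953.971 + 497.760) * Fx,           # ~5354 MeV
    "B(charm)":   (751.522 + 953.971) * Fx,           # ~6290 MeV
}
for name, m in masses.items():
    print(f"{name:11s} {m:6.0f} MeV")

Ny = masses["B(charm)"] / 105.667                     # mass in muon masses
print(f"tau(B(charm)) = {2.4e-6 / Ny**4:.1e} s")      # ~1.9e-13 s
```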
Hyperons and resonances
Hyperons
The d=2 state is the ground state outside the Schwarzschild surface for the strong interactions and is responsible for the structure of hyperons. During the transition of a W pion from the d=2 state into d=4, vector bosons appear in the d=2 state as a result of the decay of the W pions into two large loops. Each loop has a mean energy equal to
E = (m(W−),d=2 + m(W0),d=2 − m(W−),d=4 − m(W0),d=4)/2 = 19.367 MeV. (126)
The vector bosons interact with the W pions in the d=2 state. The mean relativistic energy EW of these bosons is
EW = E·((A/(2B)) + 1)^1/2 = 25.213 MeV. (127)
Groups of the vector bosons can contain d loops. In the d=2 state there may then occur particles whose mass M(±,o),k,d can be calculated from formula (128), where k = 0, 1, 2, 3; the numbers k and d determine the quantum state of a particle having the mass M(±,o),k,d. The mass of a hyperon is equal to the sum of the mass of a nucleon and of the masses calculated from (128). We obtain extremely good conformity with the experimental data assuming that the hyperons contain the following particles (the values of the mass are in MeV):
m(Λ) = m(neutron) + M(o),k=0,d=2 = 1115.3, (129)
m(Σ+) = m(proton) + M(o),k=2,d=2 = 1189.6, (130)
m(Σ0) = m(neutron) + M(o),k=2,d=2 = 1190.9, (131)
m(Σ−) = m(neutron) + M(−),k=2,d=2 = 1196.9, (132)
m(Ξ0) = m(Λ) + M(o),k=1,d=2 = 1316.2, (133)
m(Ξ−) = m(Λ) + M(−),k=1,d=2 = 1322.2, (134)
m(Ω−) = m(Ξ−) + M(o−),k=3,d=2 = 1674.4. (135)
Using the formulae (128)-(135) we can summarise that, for a given hyperon, the following selection rules are satisfied:
a) each addend in the sum in (128) contains d vector bosons,
b) for the d=2 state the sum of the values of the k numbers is equal to one of the d numbers,
c) the sum of the following three numbers, i.e. the sum of the values of the k numbers in the d=2 state, plus the number of particles denoted by M(±,o),k,d=2, plus one nucleon, is equal to one of the d numbers,
d) there cannot be two or more objects in a nucleon or hyperon having the mass M(±,o),k,d for which the numbers k and d have the same values,
e) there cannot be vector bosons in the d=1 state because the d=1 state lies under the Schwarzschild surface and transitions from the d=1 state to the d=2 or d=4 states are forbidden, so in the d=1 state there can only be one W pion,
f) the mean charge of the torus of the nucleon is positive, so if the relativistic pions are not charged positively then electric repulsion does not take place; there is, however, one exception to this rule: in the d=1 state there can be a positively charged pion because during that time the torus of the proton is uncharged,
g) to eliminate electric repulsion between pions, in the d=2 state there cannot be two or more negatively charged pions,
h) there cannot be a negatively charged W pion which does not interact with a vector boson in the d=2 state in the proton, because this particle and the W pion in the d=1 state would annihilate,
i) there cannot be a neutral pion in the d=2 state in the proton because then an exchange of the positively charged pion in the d=1 state and of the neutral pion in the d=2 state takes place; this means that the proton transforms itself into a neutron, and following such an exchange the positively charged pion in the d=2 state is removed from the neutron because of the positively charged torus. Such a situation does not take place in the lambda hyperon, Λ = n + W(o),d=2.
Using these rules we can conclude that the structure of hyperons strongly depends on the d numbers associated with the Titius-Bode law for the strong interactions (i.e. with symmetrical decays) and on the interactions of the electric charges. The above selection rules lead to the conclusion that only two nucleons and seven hyperons exist. The spins of the vector bosons are oriented in accordance with the Hund law. The angular momenta and the spins of the objects having the mass M(±,o),k,d are oriented in such a way that the total angular momentum of the hyperon has a minimal value.
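Because the explicit form of formula (128) is not legible here, the sketch below simply backs the M(±,o),k,d=2 values out of the sums (129)-(135) (Python; the nucleon masses are those used elsewhere in the text):

```python
# Masses M(+,-,o),k,d=2 implied by the quoted hyperon sums (129)-(135).
m_p, m_n, m_La = 938.27, 939.54, 1115.3        # MeV
sums = {
    "Lambda:  M(o),k=0":  (1115.3, m_n),
    "Sigma+:  M(o),k=2":  (1189.6, m_p),
    "Sigma0:  M(o),k=2":  (1190.9, m_n),
    "Sigma-:  M(-),k=2":  (1196.9, m_n),
    "Xi0:     M(o),k=1":  (1316.2, m_La),
    "Xi-:     M(-),k=1":  (1322.2, m_La),
    "Omega-:  M(o-),k=3": (1674.4, 1322.2),
}
for name, (total, base) in sums.items():
    print(f"{name:20s} {total - base:6.1f} MeV")
```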
All of the relativistic pions which appear in the tunnels of a nucleon are in the S state. This means that the Λ, Σ and Ξ hyperons have half-integral spin, whereas the Ω has a spin equal to 3/2. The strangeness of a hyperon is equal to the number of particles having the masses M(±,o),k,d=2, taken with the sign '−'. Notice also that the percentages for the main channels of the decay of the Λ and Σ+ hyperons are close to the x, 1−x, y, 1−y probabilities. This suggests that in a hyperon, before it decays, the W(o),d=2 pion transits to the d=1 state and during the decay the pion which was in the d=1 state appears.
Selected resonances
The distances in mass between the resonances, and between the masses of the resonances and of the hyperons or nucleons, are close to the masses of the S bosons. The lightest resonance, Δ(1236), consists of a nucleon and the S boson in the d=2 state, i.e. the Δ(1236) consists of S(±,o),d=2{2−} and of a proton or neutron {1/2+}. The mean mass calculated over all charge states, i.e. ++, +, o, −, equals 1236.8 MeV (the number before the signs '+' and '−' denotes the approximate value of the angular momentum, whereas the '+' and '−' denote the orientations of the angular momentum, respectively 'up' and 'down'). The parity of the S(o),d pions is assumed to be negative, and the parity of the lambda hyperon is assumed to be positive. For selected resonances we have
m(N(2650)) = 3m(S(o),d=1){2+2+2−} + 1m(S(o),d=2){2+} + 1m(S(o),d=4){1+} + 1m(proton){1/2+} (or 1m(neutron){1/2+}) = 2688 MeV (J^P = 11/2−),
m(Λ(1520)) = 1m(S(o),d=1){2−} + m(Λ(1115)){1/2+} = 1537 MeV (J^P = 3/2−),
m(Λ(2100)) = 2m(S(o),d=1){2+2+} + 1m(S(o),d=4){1−} + m(Λ(1115)){1/2+} = 2145 MeV (J^P = 7/2−),
m(Λ(2350)) = 2m(S(o),d=1){2+2+} + 2m(S(o),d=4){1+1−} + m(Λ(1115)){1/2+} = 2332 MeV (J^P = 9/2+),
m(Σ(1765)) = 3m(S(o),d=4){1−1−1−} + m(Σ(1192.5))(mean value){1/2+} = 1753 MeV (J^P = 5/2−),
m(Σ(1915)) = 4m(S(o),d=4){1+1+1+1−} + m(Σ(1192.5)){1/2+} = 1940 MeV (J^P = 5/2+).
The mass of the tau lepton
The charged W pion in the d=1 state is responsible for the properties of the proton. What should the mass of a lepton be in order that the mass of such a pion is the radiation mass of the lepton for the strong-weak interactions in the d=1 state? From formula (63) we have
α_sw,W(±),d=1 · m(tau)/m(W±),d=1 = α_em · m(electron)/m_radiation(electron), (136)
where α_sw,W(±),d=1 = 0.762594. The calculated mass of the tau lepton is
m(tau) = 1782.5 MeV. (137)
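As a consistency check, the S-boson masses can be solved for from three of the sums above and then used to reproduce the remaining resonances; the last lines evaluate the tau mass from (136) as reconstructed here, with the electron radiation mass and m(W±),d=1 taken from the Lamb-shift section below (Python sketch):

```python
# Solve for the S-boson masses from three resonance sums, then check the
# others; finally evaluate the tau mass from the reconstructed (136).
m_p, m_n = 938.27, 939.54
mS1 = 1537 - 1115                         # Lambda(1520) minus Lambda
mS4 = (1753 - 1192.5) / 3                 # Sigma(1765): three S(o),d=4
mS2 = 2688 - m_p - 3 * mS1 - mS4          # N(2650)
print(f"S(o),d=1 ~ {mS1}, S(o),d=2 ~ {mS2:.0f}, S(o),d=4 ~ {mS4:.0f} MeV")
print(f"Delta        ~ {mS2 + (m_p + m_n) / 2:.0f} MeV")   # text: 1236.8
print(f"Lambda(2100) ~ {2*mS1 + mS4 + 1115:.0f} MeV")      # text: 2145
print(f"Lambda(2350) ~ {2*mS1 + 2*mS4 + 1115:.0f} MeV")    # text: 2332
print(f"Sigma(1915)  ~ {4*mS4 + 1192.5:.0f} MeV")          # text: 1940

a_sw, m_W = 0.762594, 215.760             # quoted in the text
a_em, m_e, m_rad = 1 / 137.036, 0.5109989, 0.000591895
print(f"m(tau) ~ {a_em * m_e / m_rad * m_W / a_sw:.0f} MeV")  # text: 1782.5
```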
Properties of fundamental particles
The neutrinos interact with the point mass of the electron. They are all fermions, so their physical states should be different. Neutrinos and electrons can differ by internal helicity (which dominates inside the muon) and, if not by it, by the sign of the electric charge and of the weak charge (this is the case for the third neutrino inside a pion). The possible bound states are as follows (the symbols are defined in Table 8; LLA denotes the large loop with the left helicity and antiparallel spin):
μ−R : e−R νe(anti)L+ νμL−,
μ+L : e+L νeR− νμ(anti)R+,
π−R : e−R νe(anti)L LL LLA νR− ν(anti)R+,
π+L : e+L νeR− LR LRA.
Table 8. New symbols
Particle | Spin helicity 1) | Internal helicity | Electric charge | Weak charge | New symbol
νe(anti) | + | L (left) | | + | νe(anti)L+
νe | − | R (right) | | − | νeR−
νμ(anti) | − | R | | + | νμ(anti)R+
νμ | + | L | | − | νμL−
e− | − | R | − | | e−R
e+ | + | L | + | | e+L
p+ | + | L | + | | p+L
p− | − | R | − | | p−R
n | + | L 2) | | + | nL
n(anti) | − | R 2) | | − | n(anti)R
μ− | − | R 2) | − | | μ−R
μ+ | + | L 2) | + | | μ+L
π− | − | R 2) | − | + | π−R
π+ | + | L 2) | + | − | π+L
1) The sign '+' is for parallel senses of the velocity and spin; the sign '−' is for antiparallel senses.
2) The resultant internal helicity is the same as the internal helicity of the torus having the greatest mass.
There exist the following 8 states of the carriers of the non-entangled photons:
L1 (νeR− νe(anti)L+)L, L2 (νμL− νμ(anti)R+)L, L3 (νeR− νμ(anti)R+)L, L4 (νμL− νe(anti)L+)L,
R1 (νeR− νe(anti)L+)R, R2 (νμL− νμ(anti)R+)R, R3 (νeR− νμ(anti)R+)R, R4 (νμL− νe(anti)L+)R.
These eight different states are some analogy to the eight gluons. The kaon is a binary system, and each component of this binary system consists of two large loops (created on the circular axis of the nucleon), an electron and a neutrino:
K0 : LL LLA e−R νe(anti)L+ + LL LLA e+L νeR−,
K0(anti) : LR LRA e−R νe(anti)L+ + LR LRA e+L νeR−.
The mixture of K0 and K0(anti) is
LL LLA LR LRA e−R νe(anti)L+ e+L νeR−.
New electroweak theory (continuation)
Magnetic moment of the muon
The muon magnetic moment in the muon magneton should be the same as for the electron, because the muon is an electron-type particle. There is a small difference due to the binding energy emitted by the muon (see the discussion below formulae (55) and (27)):
E(binding) = 0.498281845 + m(radiation, muon)/2 + m(π0),free − m(π0). (138)
This binding energy means that the mean mass of the virtual field composed of the virtual electron-positron pairs is E(binding) + m(bare, muon). We can introduce the following symbol:
κ = 1 + E(binding)/m(bare, muon). (139)
The ratio of the radiation mass resulting from the interactions of the virtual pairs to the bare mass of the muon is then
Δm/m(bare, muon) = κ·a_e, (140)
where a_e = 0.00115963354 (see formula (66)). For the mass of the muon expressed in its bare mass, the muon magnetic moment in the muon magneton is equal to
μ = 1 + κ·a_e·[1 + α'_w(electron-proton)/(2/3)]. (141)
From this, applying iteration, for m(muon) = 105.656314 MeV we obtain
μ' = μ − Δ = 1.00116592234 − 8.344077·10⁻¹⁰ (see (68)) = 1.001165921508. (142)
A greater mass of the muon leads to a smaller anomalous magnetic moment.
Frequency of the radiation emitted by the hydrogen atom under a change of the mutual orientation of the electron and proton spins in the ground state
The parallel polarisation of two vortices increases the binding energy of a system
E(par) = E + E_i, (143)
whereas the antiparallel polarisation decreases the binding energy
E(ant) = E − E_i. (144)
Since E_i = α_i·ħc/r, the change of the mutual orientation of the spins causes the emitted energy to be
ΔE_i = 2α_i·ħc/r = hν, (145)
and therefore
ν = α_i·c/(πr), (146)
where ν denotes the frequency. In the hydrogen atom there is an orbit-orbit interaction (the n=1 Bohr orbit with the d=1 orbit in the proton). On the first Bohr orbit (n=1) there is the mass of the electron m(electron), whereas in the d=1 state in the proton the mean mass is
M = (1 − y)·m(W+),d=1 + y·m(W0),d=1. (147)
The centre of the n=1 Bohr orbit is inside the proton, so the classical virtual electron behaves as if it were in the d=1 state (the ground state in the proton). The virtual mass of the classical electron is 4/3 times greater than the bare mass of the electron (see the explanation in the chapter "Foundations of Quantum Physics"). The total mean mass in the d=1 state is M' = M + 4m(bare, electron)/3. Since α_i/(M_i·m_i) = G_i/(hc) = const., and by analogy to formula (79), for the electroweak interactions of the electron with the proton we obtain
α_i = α_w(proton)·α_em·(m(electron)/M')². (148)
Because the radius of the first Bohr orbit is r1 = 0.5291772·10⁻¹⁰ m, applying formulae (146)-(148) we obtain
ν = 1420.4076 MHz. (149)
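A sketch of the hyperfine-frequency calculation from the reconstructed formulas (145)-(148) (Python). Two inputs are not fixed by this excerpt and are assumptions here: y ≈ 0.5 in (147), and the bare electron mass approximated by the electron mass; the small deviation from 1420.4076 MHz comes from these assumptions.

```python
import math

# 21 cm hyperfine frequency from the reconstructed formulas (145)-(148).
a_w_proton, a_em = 0.0187229, 1 / 137.036
m_e = 0.5109989                           # MeV
mW_ch, mW_0 = 215.760, 208.644            # relativistic W pions in d=1, MeV
r1, c = 0.5291772e-10, 2.99792458e8       # first Bohr orbit [m], c [m/s]

y = 0.5                                   # assumed; not quoted here
M = (1 - y) * mW_ch + y * mW_0            # formula (147)
M_prime = M + 4 * m_e / 3                 # bare electron mass ~ m_e assumed
a_i = a_w_proton * a_em * (m_e / M_prime) ** 2        # formula (148)
nu = a_i * c / (math.pi * r1)             # from (145): 2*a_i*hbar*c/r = h*nu
print(f"nu ~ {nu / 1e6:.1f} MHz")         # text: 1420.4076 MHz
```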
Lamb-Retherford shift
The Lamb shift is associated with the two different states of the charged pion in the d=1 state in the proton. We can calculate the Lamb shift using the following formula:
E_i = α_i·ħc/r = m_i·c². (150)
The Compton wavelength of a bare particle is equal to the external radius of its torus and is defined by the following formula:
λ = r_z(torus) = h/(m(bare)·c). (151)
Using formulae (150) and (151) we obtain
m_i = α_i·m(bare)/(r/r_z(torus)). (152)
Applying the aforementioned three formulae, we obtain
ν_L = α_i·c/(2π·4r1). (153)
The coupling constant we can write in the following form:
α_i = α_w(proton)·M1'·m/Y², (154)
which gives ν_L = 1058.05 MHz. Here α_w(proton) = 0.0187229 denotes the coupling constant for the weak interactions of the proton, m = 0.000591895 MeV denotes the radiation mass of the electron, Y = 424.1245 MeV denotes the point mass of the proton, whereas M1' is the distance between the mass of the relativistic charged W pion in the d=1 state and the mass of the charged pion at rest, i.e. M1' = 215.760 − 139.5704 = 76.1899 MeV.
We can also calculate this shift by analysing the condition that the increase in the force acting on the proton must be equal to the increase in the force acting on the electron. The force is directly proportional to the energy of interaction falling on a given segment. The energy of the interaction is directly proportional to the coupling constant of the interaction responsible for the change of the value of the force. The Lamb shift is caused by the weak interaction of the mass equal to the distance between the relativistic mass and the rest mass of the charged W pion in the d=1 state with the radiation mass of the electron. The increase of the radius of the orbit of the electron is as many times smaller than the external radius of the torus of the proton (A) as the sum of the coupling constants for the electron is smaller than the coupling constant of the weak interactions for the proton:
dr/A = (α'_w(electron-proton) + α_em)/α_w(proton). (155)
From this, dr = 2.722496·10⁻¹⁶ m. For the second shell of the hydrogen atom the frequency associated with such a shift is
ν_L = R·c·[1/4 − 1/(4 + dr/r1)] = 1057.84 MHz, (156)
where R = 10,973,731.6 m⁻¹.
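Both routes to the Lamb shift can be checked directly (Python; all inputs are the values quoted above, so the few-hundredths-of-a-MHz differences are rounding of those inputs):

```python
import math

# Lamb shift, route (1): formulas (150)-(154).
a_w_proton = 0.0187229
m_rad_e = 0.000591895                     # radiation mass of electron, MeV
Y = 424.1245                              # point mass of proton, MeV
M1 = 215.760 - 139.5704                   # = 76.1899 MeV
r1, c = 0.5291772e-10, 2.99792458e8
a_i = a_w_proton * M1 * m_rad_e / Y**2                       # formula (154)
print(f"nu_L ~ {a_i * c / (2*math.pi*4*r1) / 1e6:.1f} MHz")  # text: 1058.05

# Route (2): orbit-radius shift, formulas (155)-(156).
A = 0.697442e-15                          # external radius of proton torus, m
a_w_ep, a_em = 1.11944e-5, 1 / 137.036
dr = A * (a_w_ep + a_em) / a_w_proton     # ~2.7225e-16 m
R = 10973731.6                            # Rydberg constant, 1/m
print(f"nu_L ~ {R*c*(0.25 - 1/(4 + dr/r1)) / 1e6:.1f} MHz")  # text: 1057.84
```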
Summary

Table 9. The new electroweak theory
Physical quantity | Theoretical value
Electron magnetic moment in the Bohr magneton | 1.0011596521735 (see formula (69))
Muon magnetic moment in the muon magneton | 1.001165921508
Frequency of the radiation emitted by the hydrogen atom under a change of the mutual orientation of the electron and proton spins in the ground state | 1420.4076 MHz
Lamb-Retherford shift | 1057.84 MHz; 1058.05 MHz

Table 10. Mesons
Physical quantity | Theoretical value
Mass of the K± kaon | 493.728 MeV
Mass of the K0 kaon | 497.760 MeV
Ratio of the K0_L and K0_S lifetimes | 527
Mass of K*(892) | 901 MeV
Mass of Eta | 549.073 MeV
Mass of Eta' | 953.971 MeV
Mass of Ypsilon | 9463 MeV
Mass of Z0 | 90.4 GeV
Mass of W± | 79.4 GeV
Mass of D(charm) | 1867 MeV
Mass of D(strange) | 1991 MeV
Mass of B | 5281 MeV
Mass of B(strange) | 5354 MeV
Mass of B(charm) | 6290 MeV
Lifetime of B(charm) | 1.9·10⁻¹³ s

Table 11. Hyperons and resonances
Particle | Mass | J | P | S
Hyperon Λ | 1115.3 MeV | 1/2 | +1* | −1
Hyperon Σ+ | 1189.6 MeV | 1/2 | +1 | −1
Hyperon Σ0 | 1190.9 MeV | 1/2 | +1 | −1
Hyperon Σ− | 1196.9 MeV | 1/2 | +1 | −1
Hyperon Ξ0 | 1316.2 MeV | 1/2 | +1 | −2
Hyperon Ξ− | 1322.2 MeV | 1/2 | +1 | −2
Hyperon Ω− | 1674.4 MeV | 3/2 | +1 | −3
Tau lepton | 1782.5 MeV | 1/2 | |
Resonance Δ(1232) | 1236.8 MeV | 3/2 | +1 |
Resonance N(2650) | 2688 MeV | 11/2 | −1 |
Resonance Λ(1520) | 1537 MeV | 3/2 | −1 |
Resonance Λ(2100) | 2145 MeV | 7/2 | −1 |
Resonance Λ(2350) | 2332 MeV | 9/2 | +1 |
Resonance Σ(1765) | 1753 MeV | 5/2 | −1 |
Resonance Σ(1915) | 1940 MeV | 5/2 | +1 |
*Assumed positive parity.

Liquid-like plasma
The phase transitions of the Newtonian spacetime and the Titius-Bode law for the strong interactions lead to an atom-like structure of baryons. Such a model leads to the pseudorapidity density and NSD fraction in pp collisions and to the temperature and density of the liquid-like plasma.
Pseudorapidity density in pp collisions
Electron-positron pairs that decay into photons arise close to the tori/electric charges of colliding protons that have very low energy. The ratio X1 of the energy of the particles that have a transverse momentum to the energy of the emitters (i.e. of the protons having the atom-like structure) is
X1 = 2m(electron)/m(proton). (157)
When protons which have a higher energy collide, core-anticore pairs of baryons appear along a transverse direction, in such a way that the spins of the cores are parallel to the transverse direction. Half of such a segment has a length equal to rT:
rT = E·D/(2H+), (158)
where E is the energy of the colliding pp pair expressed in TeV, H+ = 727.44·10⁻⁶ TeV is the mass of the charged core of a baryon, and D = 2A/3 is the width of the charged torus of a baryon placed inside the core (A = 0.697442 fm). The segments behave in a similar way to a liquid-like plasma. The energy released during the strong interactions transits towards the ends of the segments. Within the CMS (the Compact Muon Solenoid) many pp collisions take place, and therefore liquid-like plasma (i.e. the segments) appears. The segments fill a prolate cylinder. Inside such a cylinder there are core-anticore pairs, whereas the protons that have the atom-like structure are only on the lateral surface of the cylinder, with that surface having a thickness equal to D. Since the d=1, 2 and 4 states are destroyed, inside the liquid-like plasma there arise only pions, kaons and contracted electrons having an energy of approximately 4.6 MeV, as particle-antiparticle pairs. The components of the pions arise inside the tori, whereas the kaons and contracted electrons are produced in the d=0 state, i.e. on the equators of the tori.
Pairs appear because creations and decays that conserve the symmetry are characteristic of the strong interactions. All particles produced inside the liquid-like plasma have a transverse momentum; they are the non-single-diffractive fraction (the NSD fraction). The protons that have the atom-like structure also produce hadrons that have a momentum tangential to the surface of the cylinder; this is the single-diffractive fraction (the SD fraction). This means that the ratio X2 of the energy of the NSD hadrons that have a transverse momentum to the total energy emitted by the lateral surface of the liquid-like plasma (i.e. by the protons having the atom-like structure) is (the SD fraction is emitted along the surface, whereas the NSD fraction goes through it)
X2 = X1·πrT²·H(CMS)/(2πrT·H(CMS)·D) = X1·rT/(2D), (159)
where H(CMS) is the longitudinal length of the liquid-like plasma. Following simple conversions we obtain
X2 = X3·EN, (160)
where X3 = 0.37434 and EN is the number equal to the energy per one pp collision expressed in TeV. The liquid-like plasma behaves in a similar way to a black body because the interiors of nucleons behave like a black body. This means that the emitted energy is directly proportional to the absolute temperature of the body to the power of four. The temperature of the liquid-like plasma is directly proportional to the pseudorapidity density found in the central region
NSD fraction = (0.37434·EN)^1/4 · 100%. (161)
For an energy of 0.9 TeV we obtain an NSD fraction equal to 76.18%, whereas for 2.36 TeV we obtain 96.95%. We can see that there is an increase of 27.3% from 0.9 TeV to 2.36 TeV. This theoretical result is consistent with the experimental data [1].
There is a threshold at EN = 2.672 TeV. For energies higher than 2.672 TeV, the NSD energy becomes higher than the energy of the protons that have the atom-like structure on the lateral surface of the liquid-like plasma. This means that the external layers of the liquid-like plasma can separate from it explosively.
The temperature and density of liquid-like plasma
The Compton wavelength of the bare electron is equal to the external radius of the polarized torus (see formula (62)); similarly, the characteristic wavelength for colliding nucleons, leading to the liquid-like plasma, is equal to A = 0.697442 fm. This follows from the fact that in the liquid-like plasma the Titius-Bode orbits for the strong interactions are destroyed. Applying Wien's law we obtain that the lowest temperature of the liquid-like plasma, corresponding to the characteristic wavelength A, equals 4.155·10¹² K. From the uncertainty principle, the energy of a loop having a circumference equal to 2π·2A/3 is 67.5444 MeV; therefore, for a length equal to A the energy is approximately 283 MeV. With such an energy, a π+π− pair can be produced. We also know that for an energy equal to the threshold of 2.672 TeV per colliding pair of nucleons, the released energy is equal to the mass of a nucleon, i.e. approximately 939 MeV. This means that the 283 MeV leads to the following number E0 equal to the energy per colliding pair of nucleons expressed in TeV: E0 = 2.672·283/939 = 0.805. Such an energy is needed in order to create liquid-like plasma having the lowest temperature, i.e. 4.155·10¹² K. Because the temperature is directly proportional to the NSD fraction, we obtain the following formula for the temperature T of the liquid-like plasma:
T = X4·(0.37434·EN)^1/4, (162)
where X4 = 5.6·10¹² K.
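The constant X3 in (160) follows from (157)-(159), since X2 = X1·rT/(2D) = X1·EN/(4H+); a sketch reproducing it together with the quoted NSD fractions and temperatures (Python):

```python
# X3 from (157)-(159), NSD fractions (161) and plasma temperature (162).
m_e, m_p = 0.5109989, 938.272             # MeV
H_plus = 727.44e-6                        # charged-core mass, TeV
X1 = 2 * m_e / m_p                        # formula (157)
X3 = X1 / (4 * H_plus)                    # rT/(2D) = EN/(4*H_plus)
print(f"X3 = {X3:.5f}")                   # text: 0.37434

X4 = 5.6e12                               # K
for EN in (0.9, 2.36):
    print(f"EN = {EN}: NSD = {(X3 * EN) ** 0.25 * 100:.2f}%")
for EN in (0.805, 9.1):
    print(f"EN = {EN}: T = {X4 * (X3 * EN) ** 0.25:.3e} K")
```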
For example, for an energy equal to 9.1 TeV per colliding pair of nucleons, we obtain a temperature of the liquid-like plasma equal to approximately 7.6·10¹² K.
At the lowest temperature of the liquid-like plasma, an energy equal to approximately 727 + 283 = 1010 MeV is associated with each core of a baryon, and such a core occupies a volume equal to approximately V = 8A³/3. This leads to the lowest mass density of the liquid-like plasma, which is 2·10¹⁸ kg/m³. With an increasing energy of the collisions, the volume of the core of the baryons is constant, whereas the released energy ER increases due to the strong interactions: ER = 283·EN/E0 [MeV]. The density of the liquid-like plasma is ρ = (H+ + ER)/V. This formula can be expressed as follows:
ρ = X5·(2.07 + EN), (163)
where X5 = 0.692·10¹⁸ kg/m³.
References
[1] The CMS Collaboration; Transverse-momentum and pseudorapidity distribution of charged hadrons in pp collisions at sqrt(s) = 0.9 and 2.36 TeV; arXiv: 1002.0621v2 [hep-ex], 8 Feb 2010.
New Cosmology
Introduction
The six parameters describing the physical state of the Newtonian spacetime and the mass density of the Einstein spacetime lead to the Protoworld. Our early Universe (the cosmic loop) arose in a similar way to the large loop responsible for the strong interactions in baryons; however, we must replace the binary systems of neutrinos that the large loops are composed of with binary systems of the greatest neutron stars, which are typical neutron black holes. The Protoworld was a big torus around a spherical mass. The surface of the torus was composed of deuterium (i.e. of electrons and binary systems of nucleons) and appeared similar to the Ketterle surface in a strongly interacting gas [1]. In the centre of the torus there was a mass composed of typical neutron black holes. The calculated mass of the entire object is M = 1.961·10⁵² kg. The radius of the equator of the big torus was equal to 286.7 million light-years.
Our Universe appeared on the circular axis inside the big torus as a loop composed of protogalaxies built out of typical neutron black holes. These protogalaxies had already assembled into larger structures, which are visible today, before the 'soft' big bang, due to the four-neutrino symmetry resulting from the long-distance interactions of the weak charges of neutrinos, i.e. due to the exchanges of the binary systems of the closed strings. The anticlockwise internal helicity of our Universe was associated with the rotations of the protogalaxies and of the binary systems of protogalaxies and with the spin speed of the cosmic loop (the loop had a spin equal to 1). Before the 'soft' big bang, the axes of the rotations of the binary systems of protogalaxies were tangential to the circular axis of the big torus. The calculated mass of the Universe (without the dark energy, which is the remainder of the big torus and of the big central mass) is m = 1.821·10⁵¹ kg. The ratio of the mass of the Protoworld to the mass of the Universe was β = 10.769805. The radius of the Universe loop was equal to 191.1 million light-years.
Because a neutrino is built out of closed strings moving with a speed 2.4248·10⁵⁹ times higher than c, the energy (not mass) frozen inside a neutrino (and thus not measured by an external observer) is equal to
M = m(neutrino)·(2.4248·10⁵⁹)², (164)
where m(neutrino) = 3.33493·10⁻⁶⁷ kg. This means that there is a possibility of the Protoworld-neutrino transition. Before such a transition, the Protoworld had a mass equal to M; this is because inside this object there was bound energy of the Einstein spacetime equal to E = mc². During the transition, this energy appeared in the new neutrino as the lacking dark energy.
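Two of the headline numbers above are one-line computations (Python; values as quoted):

```python
# Frozen energy of a neutrino, (164), and plasma density, (163).
m_neutrino, ratio = 3.33493e-67, 2.4248e59
print(f"M = {m_neutrino * ratio**2:.3e} kg")          # text: 1.961e52 kg

X5 = 0.692e18                             # kg/m^3
for EN in (0.805, 9.1):                   # TeV per colliding nucleon pair
    print(f"EN = {EN}: rho = {X5 * (2.07 + EN):.2e} kg/m^3")
```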
There arose regions filled with additional binary systems of neutrinos as the remnant of the disintegrated Protoworld. This is the dark energy, which had and still has a mass/energy equal to M. The structure of the Protoworld meant that there were four inflows of dark energy into the cosmic loop.
Cosmic structures in the Universe
The four-neutrino symmetry leads to the following formula, which describes the number of objects found in the structures of the Universe:
D = 4^d, (165)
where d = 0, 1, 2, 4, 8, 16 for flattened spheroid-like structures, and d = 3, 6, 12 for chain-like structures. The four-neutrino symmetry law concerns the neutrinos in the pions, the binary systems of neutrinos in one component of a double helix of entangled photons, the nucleons in protonuclei (for example, tetraneutrons can appear), the typical neutron black holes in protogalaxies, and the binary systems of protogalaxies (the protogalaxies I also refer to as massive galaxies) in the Universe. The cosmic structures composed of the binary systems of protogalaxies I refer to as follows:
d = 0 is for a single object (i.e. the binary system),
d = 1 is for a group,
d = 2 is for a supergroup,
d = 4 is for a cluster,
d = 8 is for a supercluster,
d = 16 is for a megacluster (the early Universe was the megacluster of the binary systems of protogalaxies),
d = 3 is for a chain,
d = 6 is for a superchain,
d = 12 is for a megachain.
Black body spectrum
How is the black body spectrum produced? Large loops are produced from the energy released during nuclear transformations. The distance between the binary systems in the Einstein spacetime is 554.321 times greater than on the torus of the proton. The mean distance between the binary systems of neutrinos on the torus is approximately 2π times greater than the external radius of the neutrino. From these conditions we can calculate that approximately 7.5·10¹⁶ binary systems of neutrinos are on the large loop. This means that 512 such loops contain approximately 3.84·10¹⁹ binary systems of neutrinos. A superphoton consists of 2·4³² = 3.69·10¹⁹ binary systems of neutrinos (it is the double helix loop, and each helix loop is composed of 256 megachains). This means that superphotons can appear which have an energy equal to 67.5444 MeV. An equivalent of this amount of energy transits onto the equator of the torus, and each superphoton has a length equal to 2πA, where A denotes the external radius of the torus (the equator of the torus is the trap for the photons). This length is associated with the internal temperature of a nucleon/black body via the Wien's law equation λT[m]·T[K] = 0.002898. This means that the internal temperature of nucleons is 6.6·10¹¹ K. When the energy of such a set of superphotons is 208.644 MeV (the relativistic mass of the neutral pion in the d=1 state), such a set transits to the d=1 state and the length of each superphoton increases to 2π(A+B). Such photons are emitted because in the d=1 state there can be only one portion having an energy equal to 208.644 MeV. This means that the measured frequency of the photons related to the maximum of intensity is A/(A+B) = 0.58154 times lower than would result from the Wien's law equation. Using today's temperature of the Universe (2.725 K) we obtain λT = 1.0635 mm, λν = 1.8287 mm and ν = 163.94 GHz.
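The shifted black-body maximum is a direct application of Wien's law with the quoted factor A/(A+B) (Python sketch):

```python
import math

# Internal nucleon temperature and the shifted CMB maximum (Wien's law).
A = 0.697442e-15                          # m
print(f"T_internal ~ {0.002898 / (2 * math.pi * A):.2e} K")   # ~6.6e11 K

shift = 0.58154                           # A/(A+B), as quoted
lam_T = 0.002898 / 2.725                  # Wien peak at 2.725 K, m
lam_nu = lam_T / shift
c = 2.99792458e8
print(f"lambda_T = {lam_T*1e3:.4f} mm, lambda_nu = {lam_nu*1e3:.4f} mm")
print(f"nu = {c / lam_nu / 1e9:.2f} GHz")                     # ~163.94 GHz
```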
Why is the length of the photons increased from 2πA·2/3 = 2.9214·10⁻¹⁵ m to 1.8287·10⁻³ m, i.e. by approximately 6.26·10¹¹ times? There are two reasons (see the further explanations). The decay of each superphoton into the photon galaxies increased the length of the early photons 2·4¹⁶ = 8.6·10⁹ times. Initially, the superphotons overlapped with the cosmic loop, so each had a radius of approximately 0.1911 billion light-years. Today the elements of a superphoton interacting with the baryonic matter fill the sphere whose radius is approximately 13.4 billion light-years, i.e. the radius and the length of the early photons increased about 70 times. This means that the length of the early photons increased approximately 6·10¹¹ times. We see that this theoretical result is consistent with the observational fact discussed. Because of the broadening of the d=1 state/tunnel we observe a black body spectrum.
In nucleons, the virtual photons appear on the circular axis and are in the d=0 and d=1 states as well. This makes their mean length equal to 2π(2A/3 + A + A + B)/3 = 4.95 fm, and such is the mean distance of the interacting deuterons on the big torus. In reality, the photons arise as the gluon loops that become photons outside the strong field. For a torus composed of the binary systems of the cores of baryons, the mean distance of the interacting pairs is 2πA = 4.382 fm. Because the mass is directly proportional to the area of the torus, the mass of the Protoworld composed of deuterons is almost the same as that of the object composed of the binary systems of the cores of baryons:
{(939.54 + 938.27 − 2.22)/(2·727.44)}/(4.95/4.382)² = 1.012.
The anisotropy power for the CMB radiation
The electric charge of the core of a nucleon is created by the loop spinning inside the torus of the core, whereas the lines of the electric forces converge on the electric charge/circle. The direction of the magnetic vector associated with the electric charge overlaps with the axis of the torus. Our Universe arose and developed as the cosmic loop inside the torus of the Protoworld. The magnetic vectors of the neutrons within the cosmic structures were tangent to the cosmic loop. Magnetic polarisation dominated because the neutrons are electrically neutral. This means that the cosmic loop was also a magnetic loop. The cosmic structures in the expanding cosmic loop were mostly moving in directions perpendicular to the cosmic loop. Due to the law of conservation of spin, the magnetic polarization of the protogalaxies should be parallel to the direction of the relativistic motions of the protogalaxies, i.e. it should be perpendicular to the cosmic loop. This means that there were 90° turns of the magnetic axes of the protogalaxies.
When the gravitational field of the big torus that squeezed the early Universe disappeared, an evaporation of the typical neutron black holes that the baryonic loop consisted of began. The neutrons placed on the surface of the neutron stars, subject to the weak decays, emitted electrons and entangled electron-antineutrinos. Due to the beta decays, protons appeared on the surface of a neutron black hole. The electric repulsion of the protons meant that the protons assembled on the equator of the neutron black hole. Ultimately, the electric repulsion exceeded the gravitational attraction, and separations of the protons from the surface of the star took place in the plane of the equator. The proton beams carried forward some neutrons.
The atomic nuclei that arose caused nuclear explosions in the region between the surface of the neutron star and the Schwarzschild surface. Since the neutron stars increased their size due to the inflows of the dark energy, this energy became free. The succeeding inflows of the dark energy produced during the transition of the Protoworld caused an expansion of the neutron black holes. This meant that above the Schwarzschild surfaces more photons, electrons and closed currents of protons recurrently appeared. The planes of the currents were tangent to the surface of the expanding cosmic loop, whereas the magnetic axes associated with such currents were perpendicular to the surface. The photons that appeared were moving most often in directions tangent to the surface of the exploding cosmic loop.
On the surface there were also cold and hot regions. The cold regions were in the peripheries of the exploding cosmic structures. They arose due to the redshift of the entangled binary systems of neutrinos (i.e. the carriers of the photons) produced in the beta decays on the equators of the typical neutron black holes before their expansion. The hot regions were near the magnetic poles. They arose due to the beta decays after the expansion of the typical neutron black holes, i.e. due to the lack of the redshift. There were 90° angles between the directions of motion of the hot photons (the radial directions) and the directions of motion of the cold photons (directions tangential to the equators). There was also electron and proton plasma. This means that there were adequate conditions for the electric polarization of the photons due to Thomson scattering. The photons polarized due to the scattering on the electric charges were moving perpendicularly to the surface of the cosmic loop. The polarized photons were moving away from the surface, i.e. were moving into cooler parts of the cosmos. Some of them fell into the opposite part of the expanding cosmic loop. Today we should observe the electrically polarized early photons in the CMB, and such polarization should be tangent to today's celestial sphere. The enlargement of the neutron stars was easier in the peripheries of the early cosmic structures, so in these regions the intensity of the E-mode polarization was higher.
Because the surface of the expanding cosmic loop was a closed pipe/chain, we can assume that on the surface there were N = 4¹² binary systems of protogalaxies, i.e. a megachain. We can calculate the angular size of the structures using the formula L = sqrt[(360°)²/N], where N denotes the number of structures, whereas the multipole moment can be calculated using the formula I = 180°/L. On the surface of the expanding cosmic loop there was one megachain (L = 360°, I = 0.5). There were 4⁴ superclusters (L = 22.5°, I = 8), 4⁶ superchains (L = 5.63°, I = 32), 4⁸ clusters (L = 1.41°, I = 128), 4⁹ chains (L = 0.703°, I = 256), 4¹⁰ supergroups (L = 0.352°, I = 512), 4¹¹ groups (L = 0.176°, I = 1024) and 4¹² single objects (L = 0.088°, I = 2048). The anisotropy power of the quadrupole is associated with the energy emitted during the Protoworld-neutrino transition. The megachain on the surface of the cosmic loop then decayed into 16 parts, each containing 16 superclusters (L = 90°, I = 2). This is known as the quadrupole.
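The angular sizes and multipole moments quoted above follow from L = 360°/sqrt(N) (Python):

```python
import math

# Angular size L = sqrt((360)^2/N) and multipole moment I = 180/L.
structures = [("megachain", 1), ("quadrupole", 16), ("superclusters", 4**4),
              ("superchains", 4**6), ("clusters", 4**8), ("chains", 4**9),
              ("supergroups", 4**10), ("groups", 4**11),
              ("single objects", 4**12)]
for name, N in structures:
    L = 360 / math.sqrt(N)
    print(f"{name:15s} N = {N:>8}: L = {L:7.3f} deg, I = {180 / L:6.1f}")
```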
In the dark energy, electron-positron pairs appeared. The energy of the photons per neutron associated with the weak interactions of the radiation mass of the pairs with the dark energy can be calculated using the formula
XL = a·m(neutron)·α'_weak(electron-proton) = 12.197 eV/neutron, (166)
where a = 0.001159652, m(neutron) = 939.54·10⁶ eV, and α'_weak(electron-proton) = 1.11944·10⁻⁵. This energy is inside the sphere filled with dark energy (its radius is 20.9±0.1 billion light-years; see the further explanation in this paragraph and in the chapter titled "Radius of the Universe and the Hubble constant"), which means that the energy inside the sphere filled with baryons (radius 13.4±0.1 billion light-years) is
YL = al³·XL = 3.22 eV/neutron, (167)
where al = 13.4/20.9 = 0.6415. Because there are β = 10.769806 times fewer nucleons in the Universe than there were in the Protoworld, the released energy per nucleon in the Universe was
ZL = β·YL = 34.7 eV/nucleon. (168)
The released nuclear energy was L0 = 7.70 MeV/nucleon and today the temperature is T = 2.73 K. Therefore, the energy ZL leads to the following temperature associated with the Protoworld-neutrino transition:
TL = T·ZL/L0 = 1.23·10⁻⁵ K. (169)
Because the anisotropy power is equal to TL², the anisotropy power of the quadrupole is equal to 1.51·10⁻¹⁰ K² = 151 μK².
Our early Universe was a loop composed of typical neutron black holes; therefore, due to the beta decays, protons and electrons appeared. Under the Schwarzschild surface atomic nuclei appeared and there were the electron-proton weak interactions. The circumference of the large loop changes due to the weak electron-proton interactions. The coupling constant for the strong interactions of the large loops is equal to 1, and such interactions led to the mean temperature of the Universe today of about 2.73 K. The coupling constant for the weak electron-proton interactions is 1.11944·10⁻⁵; therefore, the mean amplitude of the temperature fluctuations for the weak electron-proton interactions is 30.56 μK, on an angular scale of about 11 degrees on the sky. Today this is half the angular distance between the largest structures, i.e. the megachains of the binary systems of massive galaxies. This leads to a mean anisotropy power equal to 934 μK².
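A sketch reproducing the quadrupole and mean anisotropy powers from (166)-(169) (Python; al is used at the quoted rounding 0.6415):

```python
# Quadrupole anisotropy power from (166)-(169) and the mean power.
a, m_n, a_w_ep = 0.001159652, 939.54e6, 1.11944e-5
XL = a * m_n * a_w_ep                     # ~12.197 eV/neutron
al, beta = 0.6415, 10.769806
ZL = beta * al**3 * XL                    # ~34.7 eV/nucleon
TL = 2.73 * ZL / 7.70e6                   # ~1.23e-5 K
print(f"XL = {XL:.3f} eV, ZL = {ZL:.1f} eV, TL = {TL:.3e} K")
print(f"quadrupole power ~ {(TL * 1e6) ** 2:.0f} microK^2")       # text: 151
print(f"mean power ~ {(2.73 * a_w_ep * 1e6) ** 2:.0f} microK^2")  # text: 934
```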
When the mass density of the Einstein spacetime increases (the additional energy is the dark energy), additional particle-antiparticle pairs appear. This means that the mass-density and temperature fluctuations increase. The largest peak/maximum is associated with the first inflow of dark energy into the cosmic loop. Before the transition of matter into dark energy, the big torus consisted of binary systems of nucleons. After the transition, the big torus consisted of two dark-energy films moving in opposite directions. In nucleons, the spin speeds are tangent to the surface of the torus of a nucleon. The spin speeds of the binary systems of neutrinos in the torus of the nucleon are from c/3 to c, and the average speed tangent to the torus is equal to 2c/3. This means that the radial speeds are on a scale from zero to 0.94281c, with the average radial speed equal to 0.745356c. A similar picture applies to the big torus after the transition. Before the transition, inside the big torus there were also nucleons moving from the surface of the big torus towards the cosmic loop, and then, just after the transition, dark energy appeared in the cosmic loop. The maximum mass density of the dark-energy flow associated with the dark-energy film moving towards the cosmic loop was moving at a speed equal to 0.745356c.
This maximum approached the cosmic loop after 128 million years. This means that the maximum approached the cosmic loop just after the decay of the superphotons and of the cosmic loop into the chains (L = 0.703°, I = 256), 118 million years after the transition (see the paragraph "Acceleration of expansion of the Universe"). We can assume, in approximation, that the first maximum is at such a value of the multipole moment, i.e. at about I = 256. The mass of the first inflow of dark energy was equal to the 1 − (2/3)² part of half of the mass of the big torus, i.e. it was m1/m = 1.3090 times greater than the mass of the cosmic loop. Due to the law of conservation of energy, this dark energy, moving with a radial speed equal to v = 0.745356c, accelerated the front of the baryonic mass to a radial speed equal to v1 = 0.5612c. This is because v²·m1/m = v1²·(1 + m1/m). The second inflow was due to the expansion of the dark energy in the centre of the torus. When this front approached the centre/circle of the expanding cosmic loop, the front of the cosmic loop was at a distance of 191.1·v1/(2v) = 71.94 million light-years. The mass of the dark energy that flowed into the cosmic loop was m2/m = (4α/360°)·(727.44 − 318.2955)/67.5444 = 1.3885 times greater than the baryonic matter, where tan α = v1/(2v). Following the first two inflows, the mass of the dark energy inside the cosmic loop was (m1 + m2)/m = 2.6975 times greater than the baryonic matter. The radial speed of the front of the baryonic matter was equal to v2 = 0.6366c because v²·m2/m + v1²·(1 + m1/m) = v2²·(1 + (m1 + m2)/m). Similar calculations for the third inflow of dark energy show that the ratio of the mass of the dark energy that flowed into the expanding cosmic loop to the mass of the baryonic matter was equal to m3/m = (2α1/360°)·m1/m = 0.1592, where tan α1 = (v1 + v2)/(4v). After the first three inflows, the mass of the dark energy inside the cosmic loop was (m1 + m2 + m3)/m = 2.8567 times greater than the baryonic matter. The radial speed of the front of the baryonic matter was equal to v3 = 0.6415c because v²·m3/m + v2²·(1 + (m1 + m2)/m) = v3²·(1 + (m1 + m2 + m3)/m). This means that the front of the fourth inflow could not approach the front of the baryonic matter on the opposite side of the expanding cosmic loop. Today v3 = 0.6415c is the radial speed of the front of the baryonic matter. The fourth inflow only enlarged the cosmic structures. The inflows also produced protuberances composed of the dark energy and baryonic matter. This caused some of the most distant cosmic objects to have a redshift greater than 0.6415.
After the first inflow of dark energy, the total mass of the cosmic loop increased 2.309 times. This also increased the temperature fluctuations to 70.6 μK and the anisotropy power to 4980 μK². The energy from the particle-antiparticle annihilations tried to accelerate the surface of the cosmic loop to a speed equal to c. After some time, the collisions of the binary systems of neutrinos and the interactions of the dark energy with the Einstein and Newtonian spacetimes evened out the dark-energy field, and its front was, and continues to be, moving with the speed c. The second, third and fourth maximums are also associated with the inflows of the dark energy into the early Universe. The second was produced by the central mass in the big torus, whereas the third and fourth were produced by the opposite part of the big torus (the direct flow and the flow after the compression in the cosmic loop).
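The front speeds v1, v2, v3 follow recursively from the quoted mass ratios and the balance relation used above (Python):

```python
import math

# Radial speed of the baryonic front after each dark-energy inflow:
# v^2*(m_i/m) + v_prev^2*(1 + sum_prev) = v_new^2*(1 + sum_new).
v = math.sqrt(5) / 3                      # 0.745356, in units of c
ratios = [1.3090, 1.3885, 0.1592]         # m1/m, m2/m, m3/m as quoted
total, v_front = 0.0, 0.0
for r in ratios:
    lhs = v**2 * r + v_front**2 * (1 + total)
    total += r
    v_front = math.sqrt(lhs / (1 + total))
    print(f"v_front = {v_front:.4f} c")   # 0.5612, 0.6366, 0.6415
```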
The maximums of the mass density of the dark-energy flows approached the centre of the expanding Universe (initially it was a circle) after 256 million years from the transition (multipole moment approximately I = 512), after 384 million years (approximately I = 768) and after 740 million years (approximately I = 1479).
Polarization of the CMB
Because the early cosmic structures were neutron black holes, the decoupling of the photons and electric charges from the expanding cosmic structures was possible when these particles crossed the Schwarzschild surface. This happened when the angular sizes had increased approximately two times, since the density of the cold photons was at its highest on the surfaces of the neutron black holes. The ionized matter, i.e. the protons, electrons and ionized atoms, was between the surfaces of the neutron stars and the Schwarzschild surface. The scenario was as follows. The inflow of dark energy increased the density of the Einstein spacetime inside the neutron black holes, which increased their angular sizes. Next, ionized matter appeared above the Schwarzschild surface. When the radius of the neutron black holes had increased more than two times, hot and cold photons appeared, moving tangentially to the surface of the expanding cosmic loop. Due to the Thomson polarization theory, E-mode photons appeared. We can see that at first there appears the anisotropy-power maximum (i.e. the maximum for the density fluctuation of the dark energy and the temperature fluctuation), followed by the maximum for the density of the ionized matter and then by the maximum for the E polarization. The CMB polarization was highest when the produced velocity gradient was at its highest (i.e. when the neutron black holes swelled). The velocity gradient, i.e. the polarization spectrum, is out of phase with the density spectrum, i.e. with the temperature anisotropy.
For the maximums of the E polarization we should observe multipole moments equal to approximately I ≈ 128, 256, 384 and 740. The most energetic early photons had an energy of about 8.8 MeV, which is the binding energy of the nucleons inside iron. The characteristic energy for the beta decays is 0.754 MeV. Furthermore, the maximum temperature fluctuations for the scalar E-mode polarization should be approximately 8.8/0.754 = 11.7 times lower than the maximum temperature fluctuations for the densest matter, i.e. 70.6/11.7 = 6.1 μK. The maximum anisotropy power associated with the scalar E-mode polarization should therefore be approximately 37 μK². This occurs for the multipole moment I = 384 because the density of the ionized matter was then at its lowest, the ranges of the photons were greatest and the E polarization was strongest. The last maximum of the E-mode is lower than the last but one because there was also an inflow of baryonic matter that increased the mass density of the ionized matter. The obtained value is only a rough estimate. The peak at I = 256 for the E polarization is partially masked due to the similar conditions leading to this peak and to the peak at I = 384. The peak at I = 128 for the E polarization is lower than the peak at I = 384 due to a higher mass density of the electric charges. The peak at I = 740 is lower than the peak at I = 384 because some part of the energy of the dark energy was absorbed by the baryonic matter in the opposite part of the cosmic loop.
We can see that the CMB strongly depends on the atom-like structure of baryons, on the new interpretation of the uncertainty principle (the decays of the entangled photons) and on the new cosmology, i.e. on the evolution of the Protoworld and on the initial distribution of the binary systems of protogalaxies associated with the four-neutrino symmetry.
Radius of the Universe and the Hubble constant
During the era of the neutron stars and big stars, 80% of the free neutrons were transformed into iron (about 92%) with an impurity of nickel (about 8%), and 5.81% into helium; this means that approximately 40% of the neutrons were transformed into protons (see the paragraph titled "Abundance of the chemical elements…"). During the decay of a neutron, an energy equal to approximately 0.76 MeV is released, i.e. about 0.30 MeV per each nucleon in the Universe. Nuclear binding energy was also released. Because the binding energy per nucleon inside iron is 8.79 MeV, whereas inside helium it is 7.06 MeV, an energy of 7.4 MeV per each nucleon was released into our Universe. The sum is equal to L0 = 7.7 MeV per each nucleon. This means that the energy of the CMB (without the ripples) is
E(background) = m·L0·c²/m(neutron) = 1.32·10⁶⁶ J. (170)
We know that today the density of the energy of the microwave background radiation is equal to ρ(background) = 4.17·10⁻¹⁴ J/m³. The relation is therefore
4πR(CMB)³/3 = E(background)/ρ(background), (171)
from which the mean radius of the sphere filled with the CMB is R(CMB) = 1.96·10²⁶ m, i.e. 20.7 billion light-years (precisely 20.7±0.1). Such a radius, in approximation, also has the sphere filled with dark energy (approximately 20.9±0.1 billion light-years). The Hubble constant H is defined as H = c/R(sphere), expressed in km·s⁻¹·Mpc⁻¹; today this gives H = 47. Today the radius of the sphere filled with the baryonic matter is 0.6415·20.9 = 13.4 billion light-years (precisely 13.4±0.1). Outside this sphere, but at a distance smaller than 20.8 billion light-years, there can be only not numerous cosmic objects, due to the protuberances in the thickened Einstein spacetime.
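A sketch of (170)-(171) and the Hubble value (Python; standard values are assumed for the neutron mass and unit conversions, which is why the radius lands at the upper edge of the quoted 20.7±0.1 to 20.9±0.1 range):

```python
import math

# CMB energy (170), radius of the CMB sphere (171) and Hubble constant.
m_universe = 1.821e51                     # kg
L0 = 7.7e6 * 1.602176634e-19              # 7.7 MeV in J
m_neutron = 1.67492749e-27                # kg (standard value, assumed)
rho_bg = 4.17e-14                         # J/m^3
c = 2.99792458e8

E_bg = m_universe * L0 / m_neutron        # ~1.3e66 J
R = (3 * E_bg / (4 * math.pi * rho_bg)) ** (1 / 3)
ly = 9.4607e15                            # m per light-year
print(f"E_bg ~ {E_bg:.2e} J, R ~ {R / ly / 1e9:.1f} Gly")
print(f"H ~ {c / R * 3.0857e19:.0f} km/s/Mpc")                # ~47
```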
Acceleration in the expansion of the Universe
Using the formula t(lifetime) = λ/c, we can calculate the lifetime of a vortex/photon which has a circumference equal to λ. At the beginning of the 'soft' big bang, the length of the photons coupling the structures inside a binary system of protogalaxies was equal to the circumference of the circle drawn by the typical peripheral neutron black holes in a rotating binary system of protogalaxies. It was 2π times longer than the mean distance between the binary systems of protogalaxies in the cosmic loop because the planes of rotation of the binary systems were perpendicular to the cosmic loop. This means that the size of a protogalaxy was equal to the radius of the circle drawn by the peripheral black holes. Because in the cosmic loop there were 4¹⁶ binary systems of protogalaxies, the mean distance between the planes of rotation of the binary systems of protogalaxies was 0.28 light-years. The circumference was 1.76 light-years, so the lifetime of such a photon galaxy would be 1.76 years. A superphoton (the entangled photons coupling the cosmic structures) consisted of 2·4¹⁶ photon galaxies, so it decayed into photon galaxies after 15.09 billion years. The lifetime of a photon galaxy is considerably longer than the age of the Universe today: photon galaxies will live approximately 3.9·10¹² years (and will decay into 256 fragments).
The photon galaxies coupling the cosmic structures in a galaxy lead to an illusion of the presence of dark matter; the illusion follows from the fact that the photon galaxies are massless particles. The cosmic loop was the left-handed double-helix loop composed of protogalaxies. Electromagnetic interactions of electrons are responsible for the structure of the DNA; moreover, electrons are right-handed, so the DNA always winds to the right. Due to the succeeding decays of the superphotons, the cosmic loop also decayed. The free binary systems of massive galaxies appeared 7.54 billion years after the transition of the Protoworld into a neutrino. The free groups appeared 1.89 billion years after the transition, the supergroups after 472 million years, the chains after 118 million years, the clusters after 1.84 million years, the superclusters after 115 thousand years, and the free megachains after 1.76 years.
Due to the inflows of dark energy into the matter a few billion years after the transition of the Protoworld into a neutrino, the percentages of the matter and dark energy changed. Just after the first inflow of dark energy into the loop of matter, there was approximately 43% of matter and 57% of dark energy, whereas today there is approximately 26% of matter and 74% of dark energy (see the paragraph titled "Matter and dark energy"). This means that over time the rate of the expansion of the Universe changed; this happened in the period about two billion years after the transition. Due to the turbulence in the compressed dark energy inside the cosmic loop, finite regions of dark energy moving in the Einstein spacetime appeared. Because of the cosmic structures, the upper limit for the redshift for a quasar having a mass equal to a group of galaxies is 7, for a massive protogalaxy it is 8, whereas for a supercluster of typical black holes it is 10. The maximum observed redshift should not exceed 16. Due to the spacetimes, the finite regions quickly disappeared (on a cosmic scale). To calculate the distance to a cosmic object, we can calculate the redshift z from the observed redshift z(ob) using the formula that follows from General Relativity: z = [(1 + z(ob))² − 1]/[(1 + z(ob))² + 1]. Why are the Type Ia supernovae fainter than would result from the z? This is because the last formula was derived using incorrect initial conditions, i.e. the dynamics of the 'soft' big bang is different. This means that we cannot say for certain whether General Relativity is incorrect. The previous calculations show that for z(ob) = 0.6415 the massive spiral galaxies are on the surface of the sphere filled with baryons, whereas the above formula places them at a distance approximately 3.8 billion light-years from the surface. This means that the supernovae Ia are in reality at a greater distance from us than results from the above formula. We can see the discrepancy for z in the figure titled "Discrepancy for the formula…". The quasars arose about 7.5 billion years before the decays of the photons. The quasars with a low redshift arose in the collisions of galaxies. Due to the four-neutrino symmetry, the emission lines of hydrogen, helium, oxygen and iron (of carbon and magnesium also) are the brightest lines; this also suggests that the new cosmology is correct. The second flare-up of the Universe leads to the illusion of an acceleration of the expansion of the Universe about 5.7 billion years ago.
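The quoted epochs of the decay cascade all follow from the superphoton lifetime divided by powers of four; the divisors below are inferred from the quoted numbers, not stated explicitly in the text (Python):

```python
# Decay cascade of the superphotons; divisors inferred from the epochs.
t0_yr = 1.76 * 2 * 4**16                  # superphoton lifetime, years
print(f"superphotons decay after ~{t0_yr / 1e9:.2f} Gyr")     # text: 15.09
for name, div in [("binary systems", 2), ("groups", 8),
                  ("supergroups", 32), ("chains", 128),
                  ("clusters", 8192), ("superclusters", 131072)]:
    print(f"free {name:14s} after ~{t0_yr / div:,.0f} yr")
```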
The Ω constant and the number of photons in a cubic meter
Using the Einstein-de Sitter model, the critical density is
ρ(E-S) = 1.9·10⁻²⁶·h² kg/m³, (172)
where h is associated with the Hubble constant H by the relation
H = h·100 (km/s)/Mpc. (173)
We know that the Hubble constant has a value equal to H = 47; therefore, the critical density is ρ(E-S) = 4.2·10⁻²⁷ kg/m³. The ratio of the radius of the sphere filled with baryons to the radius of the sphere filled with dark energy is equal to approximately al = 13.4/20.9 = 0.6415. The mass density inside the sphere filled with baryons (baryonic matter plus dark energy) is
ρ = m·(1 + β·al³)/(V·al³) = 8.28·10⁻²⁸ kg/m³, (174)
where V = 3.2·10⁷⁹ m³. The ratio of the mass density inside the sphere filled with baryons to the critical density is Ω = ρ/ρ(E-S) ≈ 0.2.
How many photons are present in a cubic meter? Initially, the number of superphotons was equal to the number of neutrons in the cosmic loop and was associated with the transitions of the electron-positron pairs into neutrons in the region of the Einstein spacetime having an anticlockwise internal helicity and a sufficiently high mass density. About 15.09 billion years after the transition, 2·4¹⁶ photon galaxies per each initial superphoton appeared. Knowing the mass of our Universe and the mass of a nucleon, we can calculate the total number of nucleons in existence. This is equal to 1.09·10⁷⁸, so the total number of photons inside the sphere filled with the CMB radiation is today equal to 1.09·10⁷⁸·2·4¹⁶ = 0.94·10⁸⁸. The volume of the sphere filled with the CMB radiation is 3.2·10⁷⁹ m³; therefore, in one cubic meter there should be approximately 300 million photons.
Abundance of chemical elements before the era of the big stars
Due to the four-neutrino symmetry and the weight equilibrium before the era of the big stars, per each 256 free nucleons there were 64 groups each containing 4 nucleons, 16 supergroups each containing 16 nucleons, 4 chains each containing 64 nucleons, and 1 cluster containing 256 nucleons. As a result, the abundance was as follows (the total number of the nuclei is 341):
Free nucleons 75.07% (hydrogen was created from them),
Groups 18.77% (helium was created from them),
Supergroups 4.69% (oxygen was created from them),
Chains 1.17% (iron was created from them first of all),
Clusters 0.29% (first Pu-244 and then lead was created from them first of all).
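A sketch of (172)-(174), the Ω ratio, the photon count and the pre-big-star abundances (Python; values as quoted):

```python
# Critical density (172)-(173), Omega, photon number density, and the
# abundances implied by the 256+64+16+4+1 = 341 nuclei count.
h = 0.47                                  # from H = 47 km/s/Mpc
rho_crit = 1.9e-26 * h**2                 # ~4.2e-27 kg/m^3
m, beta, al, V = 1.821e51, 10.769806, 0.6415, 3.2e79
rho = m * (1 + beta * al**3) / (V * al**3)
print(f"rho = {rho:.2e} kg/m^3, Omega = {rho / rho_crit:.2f}")  # ~0.2

n_photons = 1.09e78 * 2 * 4**16           # ~0.94e88
print(f"photons per m^3 ~ {n_photons / V:.2e}")                 # ~3e8

counts = {"free nucleons": 256, "groups": 64, "supergroups": 16,
          "chains": 4, "clusters": 1}
for name, n in counts.items():
    print(f"{name:14s} {100 * n / 341:5.2f} %")
```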
Abundance of chemical elements before the era of the big stars

Due to the four-neutrino symmetry and the weight equilibrium before the era of the big stars, per each 256 free nucleons there were 64 groups each containing 4 nucleons, 16 supergroups each containing 16 nucleons, 4 chains each containing 64 nucleons, and 1 cluster containing 256 nucleons. As a result, the abundance was as follows (the total number of the nuclei is 341):
Free nucleons: 75.07% (hydrogen was created from them)
Groups: 18.77% (helium was created from them)
Supergroups: 4.69% (oxygen was created from them)
Chains: 1.17% (iron was created from them first of all)
Clusters: 0.29% (first Pu-244 and then lead was created from them)

Abundance of chemical elements immediately after the era of the big stars

The observed ‘oscillations’ of neutrinos are only the exchanges of free neutrinos for the neutrinos in the non-rotating-spin binary systems of neutrinos that the Einstein spacetime is composed of. This means that on the basis of such ‘oscillations’ we cannot calculate the mass of neutrinos. To explain the solar neutrino problem without the neutrino ‘oscillations’ (which are impossible because of the tremendous energy frozen inside the neutrinos), we must assume that inside the sun and other stars, on the surfaces separating the layers of chemical elements, the GASER (Gamma Amplification by Stimulated Emission of Radiation) works. The energy of the quanta emitted in the nucleon-helium transformation is 7.06 MeV. These quanta group because of the four-neutrino symmetry. This means that their associations contain 1, 4, 16, 64, 256, … quanta. The total energies of the possible associations are approximately 7 MeV, 28 MeV, 113 MeV, 452 MeV, …. The association having an energy of approximately 28 MeV disturbs 4 nucleons and causes these nucleons to transform into helium (in such a transformation the next association having an energy of about 28 MeV is emitted). The association having an energy of approximately 113 MeV disturbs 14 nuclei of helium and causes these nuclei to transform into iron or nickel (in such a transformation the next association having an energy of approximately 97 MeV is emitted). The other associations are useless. This means that in the core of a star the associations containing 4 and 16 quanta are amplified. We see that there are two basic channels of nuclear transformations in the core of a star: hydrogen into helium, and helium into iron (with an impurity of nickel). The GASER and the four-neutrino symmetry lead to the conclusion that the abundance of chemical elements (in the Universe) should have higher ‘peaks’ for 1, 4, (16), 56, (208) nucleons. This is consistent with the observational facts.

Assume that the energy released in the centre of the sun comes only from neutrons-helium transformations. For example, the transformation of 112 neutrons into 28 nuclei of helium releases an energy equal to 791 MeV. Moreover, 56 electron-antineutrinos are emitted. Assume now that the GASER is implemented. To release an energy of approximately 791 MeV, 4 nuclei of iron-56 should appear as a result of helium-iron transformations (about 388 MeV) and 14 nuclei of helium as a result of neutrons-helium transformations (about 395 MeV). In these two main channels of nuclear transformations, the same amount of energy should be released. In the first channel 8 electron-antineutrinos are absorbed (because of the 8 processes inverse to the beta decay), whereas in the second 28 electron-antineutrinos are emitted (because of the 28 beta decays). Therefore, during these two transformations 20 electron-antineutrinos are emitted. The concluding result depends on the abundance of protons and neutrons in the centre of the sun. In the centre, the density of the nucleons is sufficiently high that formula (196) is valid (there are approximately 3/8 protons). When the GASER acts, such an abundance leads to the emission of 22 electron neutrinos – this is about 39% of the expected number of the electron neutrinos. When the GASER does not act and when the abundance of protons is 100%, 56 electron neutrinos are emitted.

We can also assume that in stable stars there is an energy equilibrium for the dominant processes of nuclear transformations. The nuclear binding energy per nucleon has the value 8.79 MeV for iron-56 and 7.06 MeV for helium-4. There should, therefore, be approximately 100%·(8.79 − 7.06)/7.06 = 24.5% of helium and 75.5% of hydrogen if we do not take into account the more massive nuclei.
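A short Python sketch (my own check, using only the values quoted above) reproduces the association energies and the equilibrium abundance:

```python
# Sketch verifying two numbers used above:
# (1) the energies of the quanta associations, 7.06 MeV * 4**n,
# (2) the equilibrium helium fraction 100*(8.79-7.06)/7.06 = 24.5 %.

E_quantum = 7.06   # MeV, nucleon-helium transformation quantum
for n in range(4):                     # associations of 1, 4, 16, 64 quanta
    print(f"{4**n:3d} quanta -> {E_quantum * 4**n:7.1f} MeV")

E_Fe, E_He = 8.79, 7.06                # binding energy per nucleon, MeV
helium = 100.0 * (E_Fe - E_He) / E_He  # ~24.5 % helium
print(f"helium fraction: {helium:.1f} %  (hydrogen: {100 - helium:.1f} %)")
```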
Immediately after the era of the big stars, the abundance of helium and hydrogen differed. We can calculate the binding energy per nucleon in iron in the cores of the big stars. The simplest large loop consists of two binary systems of neutrinos and has an energy of 67.5444 MeV. This means that the energy of a binary system of neutrinos (its spin is 1) is approximately 33.77 MeV. When the ratio of the mass density of the thickened Einstein spacetime inside the core of a big star to its mean mass density outside the star is higher than approximately (939 + 33.77)/939 = 1.036, the thickened Einstein spacetime intensively emits energetic photons. Since the binding energy per nucleon is directly proportional to the mass density of the Einstein spacetime, the binding energy per nucleon in iron in the cores of the big stars was 8.79·1.036 = 9.11 MeV. This result leads to the conclusion that immediately after the era of the big stars there was approximately 29% of helium and 71% of hydrogen. What were the causes of the creation of such a composition of matter? The first reason is the initial abundance of the chemical elements. The second is associated with the values of the nuclear binding energy per nucleon. Finally, the third is the law that says that the released binding energy for the dominant types of nuclear transformations should have the same value. We assume that the big stars exploded when all the heaviest nuclei had been transformed into iron (with an impurity of nickel) and that the heaviest nuclei contained 256 nucleons (i.e. Nobel-256 and Lorens-256); they have a binding energy equal to 7.06 MeV per nucleon (they are extremely unstable, so we can treat them as a set of almost free alpha particles).

We know that the luminosity of a star is almost directly proportional to its mass to the power of four. My theory leads to the conclusion that the lifetime of a star is inversely proportional to its mass to the power of four. This means that the lifetime of a star is inversely proportional to its luminosity. In brief, the history of the solar system is as follows. First, there was a big star – the Oort's cloud is a remnant of the era of the big stars. Next, there followed a supernova of the Ia type – the Kuiper's belt is a remnant of the supernova. Now, there is the sun. The dark matter is composed mostly of Fe+Ni lumps which were produced during the era of the big stars. The temperature of these lumps is equal to the temperature of the CMB radiation, so detecting them is extremely difficult. The dark matter is also composed of stone+iron lumps that were produced by the supernovae.

Table 12: Big stars just after the beginning of the ‘soft’ big bang
Composition at the beginning | Composition at the end | Released binding energy per nucleon
20% H-1 | 71% H; 100%·2.05/7.06 = 29% He | 7.06 MeV
20% He-4 | 20% Fe-56 | 2.05 MeV
20% O-16 | 20% Fe-56 | 1.11 MeV
20% X-64 | 20% Fe-56 | 0.00 MeV
20% Y-256 | 20% Fe-56 | 2.05 MeV

From the results shown in Table 12 we can see that just after the era of the big stars there was 4 times as much dark matter as visible matter. During the explosions of the supernovae, the first thing produced is proton-neutron symmetrical nickel, followed by Fe-56, Si-28, N-14, Li-7. This is because at extremely high temperatures the decays should be symmetrical – for example, we can see the series 56, 28 = 56/2, 14 = 28/2 = 56/4, 7 = 56/8; similarly also for Ni-64, S-32, O-16.

Table 13: Stars of the second generation with the working GASER
Composition at the beginning | Nuclear transformations | Released binding energy per nucleon
71% H-1 | H-1 → He-4 | 7.06 MeV
29% He-4 | He-4 → Fe; for 1 part of the H-1 → He-4 there are 7.06/1.73 = 4.081 parts of the He → Fe; over time, the amount of He decreases | 8.79 − 7.06 = 1.73 MeV
About 0% Fe-56 | Over time, the amount of Fe increases | –

After the era of the big stars, i.e. about 20 billion years ago, there was 71% of hydrogen and 29% of helium. Today there is 75.5% of hydrogen and 24.5% of helium. We obtain such a composition on the assumption that during the 20 billion years 3.1% of the hydrogen transformed into helium: 71%·0.031 = 2.2%, i.e. there is 71% − 2.2% = 68.8% of hydrogen. From Table 13 it follows that simultaneously, due to the GASER, 31% of the helium transformed into dark matter,
i.e. 29%·0.31 = 9%, whereas the abundance of helium is 29% + 2.2% − 9% = 22.2%. When we omit the dark matter we obtain 100%·68.8/(68.8 + 22.2) = 75.6% of hydrogen and 100%·22.2/(68.8 + 22.2) = 24.4% of helium. We should also notice that 9%/2.2% ≈ 4.081 (see Table 13).

Matter and dark energy

The ratio of the radius of the sphere filled with baryonic matter (visible and dark) to the radius of the sphere filled with dark energy is bl = 13.4/20.9 = 0.6415. Due to the fact that the dark energy is β times greater than the baryonic matter inside the sphere filled with baryons, we should observe 1 part of baryonic matter (visible and dark) per β·bl^3 = 2.843 parts of dark energy. This leads to the conclusion that inside the sphere filled with baryons there is approximately 26% matter and 74% dark energy. After the era of the big stars, about 9% of the visible matter transformed into dark matter. This means that today the matter consists of approximately 80% + 26%·0.2·0.09 = 80.47% of dark matter and 19.53% of visible matter, i.e. there is around 21% dark matter and 5% visible matter. It is very difficult to detect dark matter (the illusory and real parts) because the real part has a temperature equal to that of the CMB.

The curvature of Space and the Cosmological Constant

We know that ρ(matter plus dark energy inside and between matter) = 8.28·10^-28 kg/m^3. The mean density of the Einstein spacetime is ρ(background) = 1.1·10^28 kg/m^3, so ρ(background)/ρ(matter plus dark energy inside and between matter) = 1.3·10^55. This means that the Universe is extremely flat (k = 0) because it is only a very small ripple on the background. Furthermore, the dark energy is more spread out than the matter. Λ denotes the cosmological constant associated with the dark energy. The dark energy also only insignificantly increases the density of the background; therefore Λ is also practically equal to zero (Λ = 0). Today the Universe is described by the flat Friedman model (k = Λ = p = 0), which is also known as the Einstein–de Sitter model. Ω denotes the ratio of the mass density of a component to the total mass density (matter plus dark energy) without the background. Today, for the visible baryonic matter Ωb = 0.05, for the visible and dark matter Ωm = ρm/ρ = 0.26, and for the dark energy ΩΛ = 0.74. Today the mean local radial speed of the baryonic matter is the same as that of the dark energy. Some time in the future, collisions of matter with antimatter will take place within the partner of our Universe, i.e. in the antiuniverse. This will signal the beginning of an end to our Universe.
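The arithmetic of the last two subsections can be verified with the following Python sketch (mine; all inputs are the quoted values, not independent data):

```python
# Sketch checking the abundance evolution and the matter/dark-energy split.

# Hydrogen/helium evolution over ~20 billion years:
H0, He0 = 71.0, 29.0
dH = H0 * 0.031            # 3.1 % of hydrogen burnt to helium -> 2.2 %
dHe_dark = He0 * 0.31      # 31 % of helium -> dark matter      -> 9.0 %
H, He = H0 - dH, He0 + dH - dHe_dark
print(f"H = {100*H/(H+He):.1f} %   He = {100*He/(H+He):.1f} %")  # 75.6 / 24.4

# Matter versus dark energy inside the sphere filled with baryons:
beta_bl3 = 2.843           # parts of dark energy per part of baryonic matter
matter = 1.0 / (1.0 + beta_bl3)
print(f"matter = {100*matter:.0f} %   dark energy = {100*(1-matter):.0f} %")
```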
Cosmogony of the Solar System and the Massive Spiral Galaxies

By studying the four-neutrino symmetry, we can see that a virtual pion can interact at maximum with 2·4^32 neutrinos (this is because of the long-distance interactions of the weak charges of neutrinos), each placed in another typical neutron black hole (the TNBH). Firstly, we can say that our early Universe contained 2·4^32 TNBHs and, secondly, that the smaller structures were the binary systems of protogalaxies, which were composed of 2·4^16 TNBHs and had two cores (because of the virtual pairs) – of which each core contained 4^16 TNBHs (for example, M31 was created in such a manner) – or one core which contained 4^16 TNBHs. The succeeding smaller structure, i.e. the binary protosupercluster, contained 2·4^8 TNBHs and had two cores (note that some globular clusters are oval-shaped) – such structures have a mass approximately 3.3 million times greater than that of the sun – or had one core (some globular clusters are spherical) – such structures have a mass approximately 1.6 million times greater than that of the sun. The next smaller structures were the binary protoclusters, which each contained 2·4^4 TNBHs and had two cores, and so on. Such binary protoclusters I refer to as solar clusters. The cores of the solar clusters evaporated intensively, and as a result the following chemical elements arose: H, He, O, X-64 (which first transformed into iron), and Y-256 (which first transformed into plutonium Pu-244 and then into lead). From these, gaseous rings arose. The Titius-Bode law defines the radii of the rings. The A/B for the strong gravitational field has almost the same value as for the strong interactions. If we assume that at the beginning of the evaporation of the solar cluster the constituents of this binary system were at a distance equal to the radius of the Pluto ring, then the centre of mass was the point of tangency between the Uranus and the Uranus-twin rings. This means that the Saturn-twin ring was tangent to the Neptune ring as well (precisely, the Saturn-twin ring split into two rings tangential in one place). The Dogon myth says that the Sun and the star Po-tolo were a binary system, and notes that human life arose on the planet revolving around Po-tolo. In the distant past the star Sirius passed close to Po-tolo and the binary system of these two stars then arose. The probability of such an event occurring is very low; therefore the solar system is unique. The separation of the Sun and Po-tolo should have occurred when there were rings, not planets. This means that it was almost a miracle that the creation of the solar system took place.

The Solar System

The megachain of binary systems of neutrinos is the first stage in the evolution of the photons that are emitted during nuclear transformations. Its mass is
m(photon-megachain) = 4·4^32·m(neutrino)/(4·4^4) = 2.403·10^-50 kg. (175)
The megachain composed of the binary systems of neutrinos has the unitary angular momentum on an orbit having a radius equal to r(megachain) = 1.464·10^7 m. The protonuclei Y-256 accumulate on this orbit. They then quickly decay into Pu-244, because these nuclei have a long half-life. The angular momentum of the nuclei must also be conserved; therefore the plutonium collected on the orbit with the following radius (from mvr = const.; for nuclei we obtain r ~ 1/m^2):
A(constituent-beginning) + B(constituent-beginning) = r(plutonium) = 1.611·10^7 m. (176)
Next, the protonuclei Y-256 emitted by the surface of the solar cluster reached the plutonium orbit and then symmetrically fell into pieces, analogically to the group of four remainders inside the baryons. This occurrence leads to establishing the Titius-Bode law for a strong gravitational field.
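Formula (175) can be inverted to show the neutrino mass it implies; the following Python sketch (my own – the inferred neutrino mass is a derived value, not one quoted in this section) also checks the plutonium-orbit radius (176), using the A(constituent-beginning) value obtained in the next subsection:

```python
# Sketch inverting formula (175) for the implied neutrino mass, and
# checking the plutonium-orbit radius (176).

m_megachain = 2.403e-50                        # kg, formula (175)
m_neutrino = m_megachain * (4 * 4**4) / (4 * 4**32)
print(f"implied m(neutrino) = {m_neutrino:.4e} kg")   # ~3.33e-67 kg

A = 0.9382e7                                   # m, A(constituent-beginning)
B = A / 1.394                                  # m, from A/B = 1.394, (181)
print(f"A + B = {A + B:.3e} m")                # ~1.611e7 m, formula (176)
```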
To calculate the radii of the orbits of the planets from the initial conditions, we can use the following analogy. Using the formula for the angular momentum, we know that if the masses of the rings changed very slowly over time, then the evaporation of the solar cluster caused the radii of the rings to increase inversely in proportion to the mass of the constituent of the binary system: m(ring)·v(ring)·r(ring) = const.; since m(ring) = const. and v(ring) = (G·M(constituent)/r(ring))^1/2, then M(constituent)·r(ring) = const. We have
M(constituent-beginning) = 4^4·4.935·10^31 kg = 1.263·10^34 kg, (177)
whereas
M(constituent-now) = M(sun) = 1.99·10^30 kg. (178)
The radii of the rings increased
M(constituent-beginning)/M(sun) = 6348 times. (179)
At the beginning, the radius of the Earth-ring was equal to
r(Earth-ring-beginning) = A(constituent-beginning) + 2B(constituent-beginning), (180)
where A(constituent-beginning) = G·M(constituent-beginning)/c^2. From this, for G = 6.674·10^-11 m^3·kg^-1·s^-2, we obtain A(constituent-beginning) = 0.9382·10^7 m. This means that for a strong gravitational field
A(constituent-beginning)/B(constituent-beginning) = 1.394. (181)
Since the orbits have a certain width, we can see that the A/B has almost the same value for a strong gravitational field (A/B = 1.394) as for the strong interactions (A/B = 1.3898). The initial radius of the Earth-ring was r(Earth-ring-beginning) = 2.284·10^7 m. The present radius of the orbit of the Earth should be
r(Earth-ring-now) = r(Earth-ring-beginning)·M(constituent-beginning)/M(sun) = 1.45·10^11 m. (182)
This result corresponds well with the established interval (1.47·10^11, 1.52·10^11) m.

The Kuiper's belt is a remnant of a supernova. The Oort's cloud is a remnant of the era of the big stars. Following the era of the big stars, a star arose in the centre of the solar system with a mass approximately 1.44 times greater than the mass of the Sun. After the explosion of this Ia supernova about 5 billion years ago, the Sun was created. During the explosion of the supernova, the following transformations took place: Ni-56 → Co-56 → Fe-56. Firstly, nickel-56 appeared, because this nucleus is the proton-neutron symmetrical nucleus. Such symmetry is always preferred at a very high temperature. Because very high temperatures prefer symmetrical decays, the following elements should be produced: Fe-56 → Si-28 → N-14 or C-14 → Li-7. The acting GASER produced nuclei that contained 64 nucleons, so their symmetrical decays lead to the following nuclei: Ni-64 → S-32 → O-16 → Li-8 → He-4 → D-2 → H-1. Because the half-life of C-14 is approximately six thousand years, today we should detect many C-12 atoms. In regions having a high density of muons, the symmetrical fusion of three nuclei was possible. This is possible because the weak mass of a muon consists of three identical weak energies, i.e. there are two neutrinos and the point mass of the contracted electron that have the same energies. Because nucleons and He-4 were (and are) the most abundant of all, the probability of the production of T-3 and C-12 was very high. The symmetrical fusion of two nuclei was also preferred, because the simplest neutral pions consist of two carriers of photons that are not entangled and have the same energy. This leads, for example, to the following fusion: C-12 + C-12 → Mg-24. We can say that muons and neutral pions are the catalysts for symmetrical fusions.
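The following Python sketch (mine; the variable names are illustrative) reproduces the chain of formulas (177)-(182):

```python
# Sketch reproducing the Earth-ring estimate; all inputs are the values
# used in the text above.

G, c = 6.674e-11, 2.99792458e8
M_begin = 4**4 * 4.935e31           # kg, (177): ~1.263e34
M_sun = 1.99e30                     # kg, (178)

A = G * M_begin / c**2              # ~0.9382e7 m
B = A / 1.394                       # from A/B = 1.394, (181)
r_begin = A + 2 * B                 # (180): ~2.284e7 m
r_now = r_begin * M_begin / M_sun   # (182)

print(f"M_begin/M_sun = {M_begin / M_sun:.0f}")   # ~6348, (179)
print(f"r_begin = {r_begin:.3e} m")
print(f"r_now   = {r_now:.2e} m")                 # ~1.45e11 m
```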
The length of the arms of the massive spiral galaxy

If we assume that the core of a protogalaxy, composed of big neutron stars, emits protonuclei containing 1, 2, 4, 8, 16, 32, 64, 128, and 256 neutrons, we can use the following analogy. The number 256 refers to the d = 0 unit found in the Titius-Bode law. Consequently, the number 128 is for d = 1, 64 for d = 2, 32 for d = 4, 16 for d = 8, 8 for d = 16, 4 for d = 32, 2 for d = 64, and 1 for d = 128. The ranges of the protonuclei were inversely proportional to their mass and to the mass of the emitter, i.e. to the mass of the protogalaxy core. We can see that the last number d has a value of 128. For a protogalaxy which contained two cores (for example M31 – Andromeda), i.e. 2·4^16 typical neutron black holes, the initial radius of the ring for which d = 128 had the value (with A(initial) = 3.15·10^14 m and B(initial) = A(initial)/1.394) r(initial) = A(initial) + 128·B(initial) = 2.92·10^16 m, i.e. 3.1 light-years (3.1 ly). If we assume that today there exists in the centre the binary system of globular protoclusters (containing 2·4^8 typical neutron black holes), then up to the present day the radius r(initial) has increased 4^8 times, i.e. the length of the spiral arm should be approximately 203 thousand light-years (62 thousand parsecs).

Size of a globular cluster

Using a similar method for globular clusters containing one core, we can establish that their diameter is equal to 79 light-years (if we assume that today there is a star in the centre which has a mass equal to that of the Sun). Using this method for globular clusters containing two cores, we can calculate their diameter to be equal to 158 light-years (if we assume that today there is a binary system of sun-like stars in the centre).

Summary

Table 14: Structures of the Universe
Structure | Mass
Largest neutron star/black hole | 4.9·10^31 kg
Massive galaxy | 2.1·10^41 kg
Group of binary systems of galaxies | 1.7·10^42 kg
Supergroup of binary systems of galaxies | 6.8·10^42 kg
Cluster of binary systems of galaxies | 1.1·10^44 kg
Supercluster of binary systems of galaxies | 2.8·10^46 kg
Chain of binary systems of galaxies | 2.7·10^43 kg
Superchain of binary systems of galaxies | 1.7·10^45 kg
Megachain of binary systems of galaxies | 7.1·10^48 kg

Table 15: Theoretical results
Physical quantity | Theoretical value
Radius of the sphere filled with CMB and dark energy | 20.8 billion ly
Radius of the sphere filled with baryons | 13.4 billion ly
Mass of the Protoworld | 1.961·10^52 kg
Mass of the visible and dark matter of the Universe | 1.821·10^51 kg
Hubble constant | 47 km·s^-1·Mpc^-1
Radius of the Protoworld | 286.7 million ly
Radius of the loop of the early Universe | 191.1 million ly
Number of binary systems of massive galaxies | 4.295·10^9
Number of massive galaxies together with dwarf galaxies, assuming twenty dwarf galaxies per one massive galaxy | 86 billion
Abundance of H-1 and He-4 following the era of the big stars (heavier elements neglected) | 71% H and 29% He
Abundance of H-1 and He-4 in the present day (heavier elements neglected) | 75.5% H and 24.5% He
Abundance of visible and dark matter and dark energy inside the sphere filled with baryons | visible matter: approx. 5%; dark matter: approx. 21%; dark energy: approx. 74%
Table 16: Theoretical results
Physical quantity | Theoretical value
λν/λT for a black body | 1.7195
Ω | 0.02
Number of photons in a cubic meter | 300 million
Anisotropy power for a quadrupole | 151 μK^2
Anisotropy power for megachains | 934 μK^2
Maximum anisotropy power for mass density fluctuations | 4980 μK^2
Multipole moments for maxima of the anisotropy power associated with inflows of dark energy | 256, 512, 768, 1479
Multipole moments for maxima of the E polarization spectrum | 128, 256, 384, 740
Maximum anisotropy power for scalar E-mode polarisation | 37 μK^2
Amplitude of the temperature fluctuations of the CMBR on an angular scale of 11 degrees | 1.11944·10^-5
A/B for a strong gravitational field | 1.394
Radius of the orbit of the Earth | 1.45·10^11 m
Length of the arms of M31 | 203,000 ly
Size of globular clusters | 79 ly or 158 ly

Four-shell Model of an Atomic Nucleus

On the basis of the four phase transitions of the Newtonian spacetime and the Titius-Bode law for the strong interactions, in this section I shall analyse the interior structure of atomic nuclei.

Volumetric binding energy of a nucleus per nucleon

The sum of the masses of the free relativistic charged and neutral W(d=1) pions is 424.403 MeV. The nucleons that an alpha particle is composed of occupy the vertices of a square whose diagonal is equal to A + 4B. The exchanged pions are most frequently located in the centre of this square. As A/r = v^2/c^2, m(W(±,o),d) = m(pion(±,o))/(1 − v^2/c^2)^1/2, and the nucleon-pion distance is (A + 4B)/2, the sum of the masses of the charged and neutral W pions is 394.499 MeV. The difference between the masses of the unbound and bound states is 29.904 MeV per two nucleons. When the side of the square is
Side = (A + 4B)/2^1/2, (183)
the volumetric binding energy per nucleon is 14.952 MeV. Each nucleon occupies a cube which has a side equal to ac = (A + 4B)/2^1/2 = 1.91258·10^-15 m. We can assume that the nucleons inside a nucleus are placed on concentric spheres where the distances between them equal ac. This means that the radius of the first sphere is equal to ac/2. This leads to the following formula for the radii of the spheres (they are not the radii of the nuclei, because the spheres have a thickness):
rsn = (n − 0.5)·ac, where n = 1, 2, 3, 4. (184)
The maximum number of nucleons placed on a sphere is
An = 4π(n − 0.5)^2, (185)
which gives A1 = 3.14, A2 = 28.27, A3 = 78.54 and A4 = 153.94. If we round these figures to the nearest even numbers (nuclei containing an even number of nucleons are more stable), we obtain the following series: 4, 28, 78, and 154. This means that on the first four wholly filled spheres there are 264 nucleons. Taking the first two numbers (4 and 28), the sum of the first and the third (4 + 78 = 82), the difference of the third and the second (78 − 28 = 50), and the difference of the fourth and the second (154 − 28 = 126), we obtain the well-known magic numbers 4, 28, 50, 82, and 126. This cannot be a coincidence, which confirms that we are on the right path to build the correct theory of an atomic nucleus. When the number of neutrons becomes equal to one of the magic numbers, transitions of the protons between higher and lower spheres occur. This increases the binding energy of a nucleus.
To calculate the electric radius of a nucleus (i.e. the radius of a nucleus obtained in experiments based on the bombardment of a nucleus by electrons), we have to add the electric radius of the nucleon to the radius of the last sphere. Since the charged pions in the nucleons are placed in the d = 1 state, the electric radius is equal to A + B = 1.19927·10^-15 m. Furthermore, the electric radius of the nucleus for An = 110 is
rje(An=110) = 2.5·ac + (A + B) = 5.98·10^-15 m. (186)
If we define the electric radius by using the formula
rje = roe·An^1/3, (187)
then for a nucleus containing An = 110 nucleons we obtain roe = 1.25 fm. The value of roe changes from 1.28 fm for An = 32 to 1.23 fm for An = 264. Since the range of the strong interactions of a nucleon is A + 4B, the radius of a nucleus for strong interactions (i.e. the radius of a nucleus obtained during experiments based on the bombardment of a nucleus by nucleons having an energy of approximately 20 MeV) is greater:
rjj(An=110) = 2.5·ac + (A + 4B) = 7.49·10^-15 m. (188)
If we define such a radius by using the formula
rjj = roj·An^1/3, (189)
then for a nucleus containing An = 110 nucleons we obtain roj = 1.56 fm. The value of roj changes from 1.76 fm for An = 32 to 1.47 fm for An = 264.

Model of dynamic supersymmetry for nuclei

From the results of [1] we can see that the nucleons in nuclei are grouped in the following way: a = 2 protons and 2 neutrons, b = 3 protons and 5 neutrons, c = 3 protons and 4 neutrons, d = 1 proton and 1 neutron. The new theory explains this as follows:
a) A proton exists in two states with the probabilities y = 0.50838 and 1 − y = 0.49162. If we multiply these probabilities by two (for a deuteron) or by four (for an alpha particle), we obtain integers (approximately), because the probabilities y and 1 − y have almost the same values.
b) A neutron exists in two states with the probabilities x = 0.62554 and 1 − x = 0.37446. If we multiply these probabilities by eight, we obtain integers (5.004, i.e. approximately 5, and 2.996, i.e. approximately 3). The 8 is the smallest integer which leads to integers (in approximation).
c) For a system containing 50% of a) and 50% of b), we obtain the following probabilities: (x + y)/2 = 0.56696 and (1 − x + 1 − y)/2 = 0.43304. Here the smallest factor is equal to 7 (0.56696·7 = 3.969, i.e. approximately 4, and 0.43304·7 = 3.031, i.e. approximately 3).
A nucleus chooses a mixture of the a), b), c), and d) states in such a manner that the binding energy is the greatest. The 2p2n groups appear when the interactions of the protons dominate, whereas the 3p5n groups appear when the interactions of the neutrons dominate.

The energy of the Coulomb repulsion of protons

To calculate the Coulomb energy of the repulsion of protons for wholly filled spheres, we can use the following analysis. Since wholly filled spheres have spherical symmetry, the Coulomb energy of the repulsion of a proton placed on the surface of the last wholly filled sphere, per one nucleon, equals
Ecn/An = (k·Z·e^2/rsn)·(Z/An), where k = c^2/10^7. (190)
If we express the energy in MeV, we obtain
Ecn/An [MeV] = 0.753·Z^2/(An·(n − 0.5)). (191)
For Z = 2, An = 4 we obtain 1.5 MeV; for Z = 16, An = 32 we obtain 4.0 MeV; for Z = 46, An = 110 we obtain 5.8 MeV; and for Z = 104, An = 264 we obtain 8.8 MeV.
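The shell bookkeeping above is easy to verify numerically; the following Python sketch (my own illustration) recomputes the shell capacities (185), the radii coefficients and the Coulomb term (191):

```python
# Sketch of the four-shell bookkeeping; all inputs are values from the text.
import math

A_TB, B_TB = 0.6974425, 0.5018395            # fm, Titius-Bode A and B
ac = (A_TB + 4 * B_TB) / math.sqrt(2)        # ~1.91258 fm, cube side

# Shell capacities A_n = 4*pi*(n - 0.5)^2, to be rounded to even numbers:
caps = [4 * math.pi * (n - 0.5)**2 for n in (1, 2, 3, 4)]
print([f"{c:.2f}" for c in caps])            # 3.14, 28.27, 78.54, 153.94

# Radii coefficients r_oe and r_oj for An = 110 (last filled shell n = 3):
r_e = 2.5 * ac + (A_TB + B_TB)               # electric radius, ~5.98 fm
r_j = 2.5 * ac + (A_TB + 4 * B_TB)           # strong radius,   ~7.49 fm
print(f"r_oe = {r_e / 110**(1/3):.2f} fm, r_oj = {r_j / 110**(1/3):.2f} fm")

# Coulomb repulsion per nucleon, formula (191):
def E_coulomb(Z, An, n):
    return 0.753 * Z**2 / (An * (n - 0.5))   # MeV
print(f"{E_coulomb(46, 110, 3):.1f} MeV")    # ~5.8 MeV
```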
Theory of the deuteron

The magnetic moment of a deuteron is only slightly lower than the sum of the magnetic moments of a proton and a neutron. This suggests that the p-n binary system is bound for short times in a region having a high negative pressure. We can assume that the negative pressure appears due to the exchanges of the free neutral pions. The free neutral pions appear due to the weak interactions, because then the pions can escape from the strong field. Since in the neutron the resting neutral pion is in the H^o Z^o π^o state, emissions and absorptions of neutral pions do not change the magnetic moment of the neutron. We can calculate the probability of the emission of a neutral pion by a proton. Due to the W^o → Z^o π^o transitions, the emission of a neutral pion by a proton changes its magnetic moment. In such a transition, the angular momentum of the relativistic W^o cannot change. This condition causes that during the emission of the pion π^o the electromagnetic loop Z^o (the spin speed of this loop is equal to the speed c) is in the d = 4 tunnel, i.e. in the last tunnel for the strong interactions, because then the angular momentum of the W^o(d=1) is close to the angular momentum of the Z^o. The ratio of these two angular momenta is u = 0.9575329. Since the probability of the H^+ W^o state is y = 0.5083856 and the ratio of the coupling constants for the weak and strong interactions is αw(proton) = 0.0187228615, the probability of the emission of the free neutral pions by a proton is z = y·αw(proton)·u = 0.009114214. The probability of the H^+ W^o and H^+ Z^o π^o states of the proton in the neutron-proton bound state is w = y + z, whereas that of the H^o W^+ state is 1 − w. This leads to the following deuteron-nuclear magnetic moment ratio: 0.85230.

The scattering length is
atrip = 2(A + 4B)(1 − z) + 3(A + 4B)·z = (2 + z)(A + 4B) = 5.4343 fm. (192)
In nucleons, the relativistic pions are in the d = 1 state. Since the pions consist of the large loops that have a radius equal to 2A/3, the effective range for this state is A + B + 2A/3. The effective range of the deuteron is
rtrip = (A + B + 2A/3)(1 − z) + 2(A + 4B)·z = 1.6984 fm. (193)
To obtain the binding energy of a deuteron, we must take into account the electric interactions in the triplet state (spin = 1). The W^− and W^+ interact from a distance equal to 2πA/3 for a period equal to 1 − w. The H^+(proton) and W^− interact from L for a period equal to x − (1 − y), where
L = [(2πA/3)^2 + (A + B − 2A/3)^2]^1/2 = 1.63491 fm. (194)
The H^+(proton) and H^+(neutron) interact from 2πA/3 for a period equal to x − (1 − y). The H^+(neutron) and W^+(proton) interact from L for a period equal to 1 − w. This leads to a proton-neutron electric attraction in the deuteron equal to
ΔEem = e^2·(x + y + w − 2)·(1/L − 1/(2πA/3))/(10^7·Z8) = 0.0366111 MeV,
where Z8 = 1.78266168115·10^-30 kg/MeV. Therefore, the binding energy of the deuteron, emitting two free neutral pions and bound due to the volumetric binding energy equal to ΔE(volumetric) = 29.903738 MeV, is
ΔEnp = (2·m(pion,o) − ΔE(volumetric))·z + ΔEem = 2.22428 MeV.
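The deuteron quantities derived above follow from a handful of probabilities; the following Python sketch (mine; the neutral-pion mass used is an assumed standard value, so the last digits may differ slightly from the text) recomputes them:

```python
# Sketch recomputing the deuteron quantities from the quoted probabilities.

A, B = 0.6974425, 0.5018395        # fm
y, alpha_w, u = 0.5083856, 0.0187228615, 0.9575329
z = y * alpha_w * u                # pion-emission probability, ~0.0091142

a_trip = (2 + z) * (A + 4 * B)                                # (192)
r_trip = (A + B + 2 * A / 3) * (1 - z) + 2 * (A + 4 * B) * z  # (193)

dE_volumetric = 29.903738          # MeV, from the text
m_pion0 = 134.9766                 # MeV (assumed neutral-pion mass)
dE_em = 0.0366111                  # MeV, from the text
dE_np = (2 * m_pion0 - dE_volumetric) * z + dE_em             # ~2.224 MeV

print(f"z = {z:.6f}, a_trip = {a_trip:.4f} fm, r_trip = {r_trip:.4f} fm")
print(f"binding energy of deuteron = {dE_np:.5f} MeV")
```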
Binding energy of a nucleus and the path of stability

In the alpha particle there are two possible states, which I refer to as the square and deuteron states. The square state leads to the volumetric binding energy per nucleon (i.e. 14.95 MeV) and to the electric repulsion equal to 1.5 MeV per nucleon (see formula (191)). In the deuteron state, all linear axes of the tori of the nucleons overlap, so one deuteron and two free nucleons, or two deuterons, arise. If we assume that the probabilities of both arrangements are equal, then for the deuteron state we obtain a total binding energy equal to 3.33 MeV. If we also assume the probabilities of the square and deuteron states to be equal, then the binding energy per nucleon in the alpha particle is
E(He-4) = (4·14.95 − 6 + 3.33)/8 = 7.1 MeV. (195)
When the electric repulsion per nucleon is lower than the total binding energy for two separated deuterons, the protons dominate, i.e. the 2p2n groups appear. When the electric repulsion per nucleon is higher than the total binding energy for two separated deuterons, the neutrons dominate, i.e. the groups containing five neutrons and three protons appear. This is because the following formula is satisfied:
x/(1 − x) = 5/3. (196)

Table 17: Main path of stability of nuclei
ZXA a b c d | ZXA a b c d | ZXA a b c d
1H1 | 36Kr84 9 6 | 71Lu175 10 16 1
2He4m 1 | 37Rb85 9 5 1 1 | 72Hf180 9 18
3Li7 1 | 38Sr88m 10 6 | 73Ta181 9 17 1 1
4Be9 1 1 | 39Y89 10 5 1 1 | 74W184 10 18
5B11 1 1 | 40Zr90m 12 5 1 | 75Re187 9 18 1
6C12 3 | 41Nb93 11 5 1 1 | 76Os192 8 20
7N14 3 1 | 42Mo98 10 7 1 | 77Ir193 8 19 1 1
8O16m 4 | 43Tc97 12 5 1 1 | 78Pt194? 10 19 1
9F19 3 1 | 44Ru102 11 7 1 | 79Au197 9 19 1 1
10Ne20 5 | 45Rh103 12 6 1 | 80Hg202 8 21 1
11Na23 4 1 | 46Pd106 12 7 1 | 81Tl205 7 21 1 1
12Mg24 6 | 47Ag107 13 6 1 | 82Pb208m 8 22
13Al27 5 1 | 48Cd114 10 9 1 | 83Bi209 8 21 1 1
14Si28 7 | 49In115 11 8 1 | 84Po209 10 20 1 1
15P31 6 1 | 50Sn120m 10 10 | 85At210 12 20 1
16S32 8 | 51Sb121 10 9 1 1 | 86Rn222 5 25 1
17Cl35 7 1 | 52Te130 6 13 1 | 87Fr223 6 24 1
18Ar40 6 2 | 53I127 10 10 1 | 88Ra226 6 25 1
19K39 8 1 | 54Xe132 9 12 | 89Ac227 7 24 1
20Ca40m 10 | 55Cs133 9 11 1 1 | 90Th232 6 26
21Sc45 7 1 1 1 | 56Ba138 8 13 1 | 91Pa231 8 24 1
22Ti48 8 2 | 57La139 9 12 1 | 92U238 5 27 1
23V51m 7 2 1 | 58Ce140 11 12 | 93Np237 7 25 1 1
24Cr52m 9 2 | 59Pr141 11 11 1 1 | 94Pu244 5 28
25Mn55 8 2 1 | 60Nd142 13 11 1 | 95Am243 7 26 1
26Fe56 10 2 | 61Pm147 11 12 1 | 96Cm247 6 27 1
27Co59 9 2 1 | 62Sm152 10 14 | 97Bk247 8 26 1
28Ni58m 12 1 1 | 63Eu153 10 13 1 1 | 98Cf251 7 27 1
29Cu63 10 2 1 | 64Gd158 9 15 1 | 99Es254 7 28 1
30Zn64 10 2 1 1 | 65Tb159 10 14 1 | 100Fm253 9 26 1 1
31Ga69 9 3 1 1 | 66Dy164 9 16 | 101Md258 8 28 1
32Ge74 8 5 1 | 67Ho165 9 15 1 1 | 102No256 12 26
33As75 9 4 1 | 68Er166 11 15 1 | 103Lr256 14 25
34Se80 8 6 | 69Tm169 10 15 1 1 | 104Ku260 13 26
35Br79 10 4 1 | 70Yb174 9 17 1 |
ZXA denotes atomic-number/symbol-of-element/mass-number; a = 2p+2n = 2He4; b = 3p+5n; c = 3p+4n = 3Li7; d = p+n = 1D2.
? denotes a discrepancy with the results in the periodic table of elements.
m denotes a magic-number nucleus.

This principle, in particular, is satisfied by the nuclei which contain 2k(3p+5n) more nucleons than the Ca-40 = 10(2p+2n): Fe-56 [(Ca-40) + 2(3p+5n)], Ge-72, Sr-88, Ru-104, Sn-120, Ba-136, Sm-152, Er-168, W-184, Hg-200 [(Ca-40) + 20(3p+5n)].
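The group bookkeeping of Table 17 can be checked mechanically: each entry must satisfy Z = 2a + 3b + 3c + d and A = 4a + 8b + 7c + 2d. A short Python sketch (my own; the sample rows are read from the table) illustrates this:

```python
# Sketch checking sample rows of Table 17 against the group definitions
# a = 2p2n, b = 3p5n, c = 3p4n, d = pn.

rows = {                      # (a, b, c, d) as read from Table 17
    "26Fe56":  (10, 2, 0, 0),
    "37Rb85":  (9, 5, 1, 1),
    "82Pb208": (8, 22, 0, 0),
    "92U238":  (5, 27, 0, 1),
}
for name, (a, b, c, d) in rows.items():
    Z = 2*a + 3*b + 3*c + d
    A = 4*a + 8*b + 7*c + 2*d
    print(f"{name}: Z = {Z}, A = {A}")
```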
Comments relating to the table titled ‘Main path of stability of nuclei’:
The consistency with the experimental data is very high – only one result is inconsistent with the experimental data: the abundance of the 78Pt194 should be slightly higher than that of the 78Pt195, which needs revising. The mean number of the ‘a’ groups for nuclei greater than the 17Cl35 is nine – this is consistent with the theoretical value An = 36. The deviation from the mean value is significant: ±4a. Within the light nuclei the a groups dominate, whereas in the heavy nuclei the b groups dominate. This is because the binary system of the 2p2n can create 4 deuteron bonds (which leads to an additional binding energy of approximately 1.1 MeV per nucleon), whereas within the 3p5n only 3 deuteron bonds are created (which leads to an additional binding energy of approximately 0.8 MeV per nucleon). The difference between the binding energies is approximately 0.3 MeV per nucleon. Notice that, in comparison with the 2p2n groups, the 3p5n groups significantly reduce the electric repulsion in heavy nuclei. At maximum, there can be only one intermediate c state and only one d state having a low binding energy per nucleon.

The smallest magic numbers (2 and 8) are associated with the four-neutrino symmetry D = 4^d, where d = 1, 2, and the D denotes the mass numbers of the smallest magic nuclei, D = 4, 16. The magic number 20 is associated with the transition from the proton domination to the neutron domination. The 20Ca40 is the greatest nucleus composed only of the 2p2n groups. The other magic numbers (28, 50, 82, and 126) are associated with the transitions of the protons between the higher shell of a nucleus and the lower shell(s). This reduces the mean electric repulsion (see formula (191)). We should take into account that on the filled inner shells of the nuclei the numbers of protons and neutrons have approximately the same value. Detailed calculations lead to a binding energy associated with the transitions equal to approximately 0.23-0.25 MeV per nucleon.

Among the most abundant isotopes collected in the table titled “Main path of stability of nuclei” there are only 10 elements with an odd number of neutrons. Two are the very light elements 4Be9 and 7N14 and eight are radioactive elements. This suggests that there is a pairing of neutrons for strictly determined distances between them. In the light elements the neutrons are too close, whereas on the surfaces of the radioactive elements they are too far away. Neutrons have electromagnetic structures, and when they are very close to one another, electrostatic repulsion appears. When the distance between neutrons is sufficiently large we can neglect the electrostatic repulsion, whereas the attraction of neutrons resulting from the exchange of photons cannot be neglected. The electromagnetic attractions of neutrons have maximum distances equal to A + 8B and 2πA, where the A denotes the radius of the equator of the core of baryons. These two distances are respectively about 4.7 fm and 4.4 fm. The diameters of the nuclei 4Be9 and 7N14 are approximately equal to these distances; however, in light nuclei the neutrons are most often found in the centre of a nucleus. This means that the pairing of neutrons is sometimes impossible in these nuclei.

We can also calculate the lower limit for the number of nucleons in the radioactive nuclei. Radioactivity appears when the electric repulsion per nucleon is higher than the binding energy per nucleon in the alpha particle. Using formula (191) for the Bi-209, we obtain that the electric repulsion equals 7.09 MeV; therefore An > 209 defines the lower limit. On the basis of formulae (191), (195) and (196) we can calculate the binding energy per nucleon for selected nuclei:
E(O-16) = 7.1 + 3.33/4 = 8.0 MeV, (197)
E(Fe-56) = (26·8.0 + 6·(14.95 − 4) + 24·(14.95 − 5.8))/56 = 8.8 MeV, (198)
and, when we neglect the proton transitions for Pb-208,
E(Pb-208) = (26·8.0 + 6·(14.95 − 4) + 78·(14.95 − 5.8) + 98·(14.95 − 8.8))/208 = 7.65 MeV. (199)
The proton transitions increase the binding energy by approximately 0.25 MeV. We can see that the obtained approximate results reflect the experimental curve. The binding energy per nucleon depends on the internal structure of the nucleons, the volumetric binding energy, the Coulomb energy of repulsion and the transitions of protons associated with the magic numbers.
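The following Python sketch (mine) recomputes formulas (197)-(199) from the shell values used above:

```python
# Sketch recomputing the binding energies per nucleon, formulas (197)-(199).

E_vol = 14.95                        # MeV, volumetric binding energy
E_He4 = (4 * E_vol - 6 + 3.33) / 8   # (195): ~7.1 MeV
E_O16 = E_He4 + 3.33 / 4             # (197): ~8.0 MeV
E_Fe56 = (26*E_O16 + 6*(E_vol - 4) + 24*(E_vol - 5.8)) / 56          # (198)
E_Pb208 = (26*E_O16 + 6*(E_vol - 4) + 78*(E_vol - 5.8)
           + 98*(E_vol - 8.8)) / 208                                  # (199)
print(f"He-4: {E_He4:.1f}  O-16: {E_O16:.1f}  "
      f"Fe-56: {E_Fe56:.1f}  Pb-208: {E_Pb208:.2f} MeV")
```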
Summary

We obtain very positive theoretical results taking into account only the internal structure of nucleons, the volumetric binding energy, the electric repulsion of nucleons, and the transitions of protons between the shells.

Table 18: Theoretical results
Physical quantity | Theoretical value
Volumetric binding energy per nucleon | 14.952 MeV
Magic numbers | 4, 28, 50, 82, 126
Coefficient roe for the radii of nuclei for electromagnetic interactions | An=32: 1.28 fm; An=264: 1.23 fm
Coefficient roj for the radii of nuclei for strong interactions | An=32: 1.76 fm; An=264: 1.47 fm
Groups of nucleons in nuclei | dominant: 2p+2n, 3p+5n; accessory: 1p+1n, 3p+4n
Binding energy of a deuteron | 2.22428 MeV
Electric p-n attraction in a deuteron | 0.0366111 MeV
Deuteron-nuclear magnetic moment ratio | 0.85230
n-p (triplet) scattering length | 5.4343 fm
n-p (triplet) effective range | 1.6984 fm
Upper limit for the domination of protons | mean value: An = 36
Lower limit for radioactive nuclei | An > 209 (experimental result is > 209)
Binding energy per nucleon for He-4 | 7.1 MeV
Binding energy per nucleon for O-16 | 8.0 MeV
Binding energy per nucleon for Fe-56 | 8.8 MeV
Binding energy per nucleon for Pb-208 | 7.9 MeV

References
[1] P. Van Isacker, J. Jolie, K. Heyde and A. Frank; Extension of supersymmetry in nuclear structure; Phys. Rev. Lett. 54 (1985) 653.

Mathematical Constants

In this chapter, I will show that the Everlasting Theory leads to the mathematical constants applied in physics. Theories that describe the same phenomena but contain more parameters are worse theories. Mathematical constants applied in physics are parameters as well if they do not have a physical meaning. To formulate the ultimate theory, we should first define a fundamental spacetime and show that the physical properties of such a spacetime lead to the mathematical constants associated with physics (i.e. to the number e = 2.718…, to π = 3.1415… and to the imaginary unit equal to sqrt(−1)). The properties of such a fundamental spacetime should also lead to the physical constants (i.e. to G, h, c, e and the rest masses of the electron and the pions – the other physical quantities we can calculate once we know these seven parameters). I derived the above-mentioned physical constants and a few hundred other physical quantities from the properties of the fundamental Newtonian spacetime (six parameters) and the Einstein spacetime (one additional parameter, because the Einstein spacetime arose due to the spontaneous phase transitions of the Newtonian spacetime). The physico-mathematical relations are very important in order to decipher the structure of nature. In physics, the mathematical constant e = 2.7182…, the number π = 3.1415… and the imaginary unit equal to sqrt(−1) appear almost everywhere. This must have a very deep meaning.

The ground state of nature leads to e = 2.718…

In the proceeding section, I will prove that the ground state of the whole of nature leads to the Newton definition of the mathematical constant e = 2.718…:
e = 2.718… = 1/0! + 1/1! + 1/2! + 1/3! + 1/4! + 1/5! + … = 1 + 1 + 1/2 + 1/6 + 1/24 + 1/120 + ….
P. Plichta [1] described how the number e − 1 = 1.718… is associated with random sampling and the theory of combinations. My interpretation of the expression 1/0! = 1 is as follows. When there is no ball in a box, there is also a possibility that we will draw nothing, i.e. the nothing (the 0!) leads to one possibility (the 1). This means that there is a natural explanation for 0! = 1, i.e. a natural explanation as to why the number 1 appears twice.
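A short Python sketch (my own illustration) shows how quickly this truncated series approaches e:

```python
# Sketch of the truncated Newton series for e used in this chapter:
# 1/0! + 1/1! + ... + 1/5! = 2.71667, versus e = 2.71828...
import math

y = sum(1 / math.factorial(n) for n in range(6))
print(f"truncated series y = {y:.5f}")     # 2.71667
print(f"e                 = {math.e:.5f}")
print(f"with 1/6! added   = {y + 1/math.factorial(6):.5f}")  # 2.71806
```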
What is the physical meaning of the number e = 2.718…, i.e. how does this number lead to my scheme of nature – what are the relationships between e = 2.718… and the succeeding levels of nature following from the phase transitions of the Newtonian spacetime? I established that the phase transitions of the Newtonian spacetime composed of tachyons lead to stable objects, i.e. to the closed strings, neutrinos, cores of baryons and protoworlds. In order to describe the position, shape and motions of these objects with a rotating spin we need phase spaces containing the following numbers of co-ordinates and quantities N (see the formula below Table 4):
N = (d − 1)·8 + 2.
When the spin does not rotate, the number 2 in this formula disappears. This means that for each stable object there are two possibilities, i.e. the ground state when the spin does not rotate and the excited state when the spin rotates. The d = 0 is for tachyons, d = 1 is for rotating spin, d = 2 is for closed strings, d = 4 is for neutrinos, d = 8 is for cores of baryons and d = 16 is for protoworlds. Now we can interpret the numbers 1, 1, 2, 6, 24, 120 which appear in the definition of the number e = 2.718…. These are the numbers which characterize the phase spaces of objects appearing in the ground state of nature. I will also show that the numbers-factorials define spatial and time dimensions in a new way. The above series of numbers can be written as follows: 1, 6, 24, 1, 2 and 120.

1. The 1 = 0! = 0D at the beginning of the series means that there is one ideally empty volume, i.e. the 0D volume. The phase space of the Newtonian spacetime contains six elements (precisely the −6, suggesting that it is the imaginary spacetime). The 6 = 3! = 3D means that the 0D volume is filled with 3D objects described by the six co-ordinates and quantities. There are the three co-ordinates (x, y and z), one mean radius of the tachyons, one mean angular speed associated with the spin of tachyons and one mean linear speed of tachyons associated with time in the fundamental/Newtonian spacetime. We can note that 3 + 1 + 1 + 1 = 6. The spin of tachyons is very small in comparison with the half-integral spin of the closed string. We can also see that the 0! = 0D and the 3! = 3D describe the phase space of the fundamental/Newtonian spacetime/ideal gas. This is the 0D volume filled with the free 3D tachyons.

2. The number 24 describes the phase space of a non-rotating-spin neutrino. The 24 = 4! = 4D shows that the spacetime composed of free non-rotating-spin neutrinos is a 4D spacetime. The ground state of the Einstein spacetime consists of the non-rotating-spin binary systems of neutrinos. In its ground state there are also open threads composed of the binary systems of neutrinos, i.e. the 1 = 1! = 1D objects. These open threads lead to fractal structures (among other things, to the mental world). There are also surfaces which appear similar to the Ketterle surface for a strongly interacting gas, i.e. the 2 = 2! = 2D objects leading to the tori of electrons and the cores of baryons. We can see that the 4D, 2D and 1D objects are the constituents of the ground state of the Einstein spacetime. In such a spacetime, quantum effects are possible. The known particles are the excited states of the Einstein spacetime. Time in the Einstein spacetime is associated with the speed of light c, and this quantity is among the 24 co-ordinates and quantities. For rotating-spin neutrinos and binary systems of neutrinos the characteristic number is 26, which appears in string/M theory. This number does not appear within the definition of e = 2.718…, as this definition only reflects the ground state of nature as a whole.

3. The phase space of the ground state of the Protoworld contains 120 co-ordinates and quantities.
What is the meaning of the equation 120 = 5! = 5D? It means that inside a 4D object a loop having one dimension appears. Similarly, a large loop, responsible for the strong interactions, appears inside the cores of the baryons. We can say that the 4D Protoworld produced a 1D loop, i.e. the early Universe. I described the evolution (i.e. the cosmology) of the Protoworld and the early Universe earlier, in the Chapter titled “New Cosmology”. This description leads to today's Universe.

We can see that in this scheme the phase spaces of the closed strings (i.e. the 8 or 10) and of the cores of baryons (i.e. the 56 or 58) do not appear. Almost all closed strings are the components of the neutrinos, so they are not a part of the ground state of nature. Due to the internal structure of the cores of baryons, they are always ‘dressed’ into the pions. This means that the cores of the baryons also are not the ground state of the Einstein spacetime. All of the observed particles are the excited states of the ground state of the Einstein spacetime. This means that the phase spaces of these particles should not appear in the Newton definition of e = 2.718…. There are only two spacetimes: the Newtonian spacetime (which leads to the Einstein gravity) and the Einstein spacetime (which leads to electromagnetism and quantum effects but also to the weak and strong interactions having finite ranges).

The two basic elements of the Everlasting Theory lead to the mathematical constant e = 2.718…, i.e. the phase transitions of the fundamental/Newtonian spacetime and the Titius-Bode law for the strong and gravitational interactions. In the Titius-Bode law for the strong interactions we have (see the formulae (10) and (31)) A = 0.6974425 fm, B = 0.5018395 fm. If we change these values, we obtain incorrect values for, for example, the mass of nucleons and the magnetic moments of nucleons. The theory is very sensitive to each change in the values of the parameters associated with the properties of the Newtonian and Einstein spacetimes. We can see that the following expression is close to e = 2.718…:
x = 1 + (A + B)/A = 2.71954.
On the other hand, the phase spaces of the objects in the ground state of nature lead to the following number:
y = 1 + 1 + 1/2 + 1/6 + 1/24 + 1/120 = 2.71667.
The mean value is then very close to e = 2.7182…:
z = (x + y)/2 = 2.7181.
There cannot exist a stable cosmic object greater than the Protoworld (120 = 5! = 5D) leading to the 720 = 6! = 6D. This is because the time it would take to create such an object surpasses its lifetime. When we add the 1/720, the z differs far more from e = 2.7182… – we then obtain z' = 2.71880. It is evident that my theory is extremely sensitive to any changes. The free tachyons have broken contact with the rest of nature. This leads to the conclusion that in the ground state of nature the non-rotating-spin neutrino-antineutrino pairs are the most important particles, i.e. most important is the phase space containing 24 numbers. The grouping of the natural numbers in 24 sets leads to the prime number cross and to many physico-mathematical relations (see the Chapter titled “Fractal Field”).
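The three numbers x, y and z can be recomputed directly; the following Python sketch (mine) uses the values of A and B quoted above:

```python
# Sketch checking the coincidence quoted above: x from the Titius-Bode
# constants A and B, y from the truncated series, and their mean z.

A, B = 0.6974425, 0.5018395            # fm
x = 1 + (A + B) / A                    # ~2.71954
y = 1 + 1 + 1/2 + 1/6 + 1/24 + 1/120   # ~2.71667
z = (x + y) / 2
print(f"x = {x:.5f}, y = {y:.5f}, z = {z:.5f}")   # z ~ 2.7181
```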
π = 3.1415… also proves that the Everlasting Theory is correct

Similarly to the number e = 2.718…, the number π is extremely common in physics. This means that the number π should have a very significant physical meaning. The constancy of π = 3.1415… suggests that the smallest stable objects (i.e. the objects appearing during the first phase transition of the Newtonian spacetime) should be inflexible circles (for a circle in a curved spacetime, or for a flexible closed string, the ratio of the circumference to the size is not equal to π). Because 1^1 = 1^2 = 1^3 = 1, the mass of a closed string is directly proportional to its circumference, but to its area and volume as well. Masses are directly proportional to the number of the closed strings they consist of – these strings are inside the neutrinos in the neutrino-antineutrino pairs that the Einstein spacetime consists of. The closed strings are inflexible (i.e. they are always ideal circles) and consist of spinning tachyons. Only the inflexible closed strings lead to the constancy of the gravitational constant. There also appear other coincidences associated with the number π. For example, the mass that is responsible for the weak interactions in the centre of the cores of baryons (approximately 424.1 MeV) is π times greater than the mass of the neutral pion (approximately 135.0 MeV).

What is the physical meaning of the imaginary unit ‘i’?

The Everlasting Theory leads to the imaginary unit. The Newtonian spacetime on the circle inside the closed string has entirely broken contact with the points lying outside the closed string on the plane in which the closed string lies. It looks as if the closed string cut the circle out from the Newtonian spacetime. We are able to call such a circle the imaginary/absent circle. Furthermore, due to the infinitesimal spin of the tachyons, the closed string has internal helicity – i.e. it produces a real jet (the real axis) within the Newtonian spacetime in a direction perpendicular to the imaginary circle. If we assume that the area of such an imaginary/absent circle is −π (the sign “−” relates to the word “absent”), then the radius of such a circle can be defined by i = sqrt(−1).

Summary

The phase spaces of the objects in the ground state of nature (i.e. in the ground states of the Newtonian and Einstein spacetimes and in the ground state of the field composed of the protoworlds) and the Titius-Bode laws for the strong and gravitational interactions lead to the mathematical constant e = 2.718…. The inflexible closed string leads to π = 3.1415… and to the imaginary unit. Furthermore, we can see that the theory which started from the phase transitions of the Newtonian spacetime (1997) and the Titius-Bode law for the strong interactions (1985) is the missing part of the ultimate theory of nature, because the mathematical constants e = 2.718…, π = 3.1415… and the imaginary unit i = sqrt(−1), and the physical constants, are coded there. Such a theory must be correct, because it shows that the values of the mathematical and physical constants depend on the properties of the fundamental/Newtonian and Einstein spacetimes. I proved that the origin of mathematics and physics is associated with the properties of the Newtonian spacetime that is composed of internally structureless tachyons having a positive inertial mass. Such a physico-mathematical theory needs only 7 parameters.

References
[1] P. Plichta; God's Secret Formula: Deciphering the Riddle of the Universe and the Prime Number Code.

Fractal Field

It is very important to unite particle physics with the theory of chaos via a single field. In the following section, I will attempt to show which properties a physical field should have so that the creation of fractals is possible.
The physical meaning of the complex number

The formula i (the imaginary unit) = exp(iπ/2) shows that the imaginary plane is perpendicular to the real axis. Let us cut out the circle that has a radius equal to i from the imaginary plane. The area of the non-existent circle equals −π. Let us assume that the axis x is the real axis, whereas the plane defined by the axes iy and iz is the imaginary plane. Let us also assume that such a mathematical object is moving along the axis iy and that the real axis x rotates around the axis iy. Under those assumptions, the wave arising along the axis iy, associated with the interval on the real axis x and the interval on the axis iz, is described by the frequently applied Euler formula exp(iφ) = cos φ + i·sin φ. Are we able to define a physical object for such a moving mathematical object? Assume that there exists a moving and spinning closed string which has internal helicity and which is placed in the Newtonian gas-like spacetime. Due to the sufficiently high internal helicity and the shape of the closed string, the winds created around the closed string separate from it on the internal equator of the closed string, because the pressure of the Newtonian spacetime/gas is lowest at these points. The winds that separate are the jets perpendicular to the plane defined by the closed string. The internal equator of the closed string is equivalent to the boundary/edge of the non-existent/cut-out imaginary circle, whereas the jet corresponds to the real axis x and the cut-out circle to the imaginary surface. If the jet of such a closed string rotates around the direction of the motion, then the aforementioned Euler formula describes the arising wave. The cut-out imaginary circle has broken contact with the closed string, i.e. such a circle is ideally flat. The gravitational field and the jets in the Newtonian spacetime are the real parts in this spacetime. The gravitational field consists of the flat imaginary part (i.e. the Newtonian spacetime) and the part having a gradient, so the gravitational field is a complex volume.

Fractal field

We can describe the behaviour of the binary system of neutrinos in a similar way to the closed string. I call a fractal field a field that consists of threads composed of non-rotating-spin binary systems of neutrinos whose spins are tangential to the threads. The divergent or convergent arrangements of the spins of the binary systems of neutrinos (i.e. of the real axes x) lead to particle physics, whereas the single-file arrangement of the spins (i.e. the single-file arrangement of the complex planes) leads to fractal geometry.

The Titius-Bode law and bifurcation

The chaos game method [2] leads to the Sierpinski triangle associated with the Pascal triangle [3]. The sums of the numbers in the succeeding lines of the Pascal triangle are equal to d = 1, 2, 4, 8, 16, 32, 64, 128, 256, and are characteristic of the Titius-Bode law
Rd = A + dB, (200)
where A/B = 1.39. This means that the Titius-Bode law is somehow associated with fractal geometry, i.e. with travelling half-distances, with the distribution of the sources of interactions, and with the creation of consecutively smaller self-similar physical objects due to symmetrical decay (bifurcation).
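As an illustration of the chaos game method [2] mentioned above, the following Python sketch (my own; it only generates the points, without plotting) approximates the Sierpinski triangle:

```python
# Minimal sketch of the chaos game: repeatedly jump halfway towards a
# randomly chosen vertex of a triangle; the visited points approximate
# the Sierpinski triangle.
import random

vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.25, 0.25                 # arbitrary starting point
points = []
for _ in range(10000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2, (y + vy) / 2   # travel half the distance
    points.append((x, y))
print(f"generated {len(points)} points of a Sierpinski-triangle approximation")
```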
How would the fractal field associated with the Titius-Bode law appear? Assume that the origin of the orbits defined by the Titius-Bode law is associated with the creation of physical rings around the neutron black hole. The temperature was sufficiently high to realize the symmetrical decays of the atomic nuclei. When we begin with a nucleus that is composed of 256 nucleons, then 8 symmetrical decays are possible. On the other hand, following the Uncertainty Principle, this leads to the conclusion that the ranges of the objects are inversely proportional to their mass. Assume the following model is possible: the nuclei that contain 256 nucleons appear on a circle (the distribution of the sources) having the radius r = A. The range of such nuclei would be B. At the distance B from the circle the first symmetrical decays occur – there appear two nuclei that each contain 128 nucleons. One part of the decay is moving towards the circle, whereas the other is moving in the opposite direction. When the first part reaches the circle, the other stops (at a distance 2B from the circle) and subsequently the second symmetrical decay is realized, and so on – this is the mechanism associated with travelling half of the distance between the circle and the place of the next symmetrical decay. Moreover, within the symmetrical decays, smaller and smaller self-similar physical objects appear, i.e. smaller and smaller atomic nuclei. As a result, we can conclude that fractal geometry may be possible due to phenomena similar to the phenomena that lead to the Titius-Bode law.

Creations of fractals in the fractal field

How is the fractal field associated with fractal geometry? Assume that in the fractal field all circular electric currents, including those inside atoms and brains, create concentric quantized circles. The dipoles in a circle are oriented in such a way that the spins of the dipoles are tangential to the circle. Such circles are very stable objects for radii greater than a lower limit. The tangle of the closed threads composed of weak dipoles and produced by a tangle of circular electric currents leads to a stable ‘soliton’ in the fractal field. Due to the current decays and circuit breakers (for example, neurons can also act as such), smaller and smaller self-similar ‘solitons’ are produced. The smaller and smaller self-similar ‘solitons’ tangle themselves because they have identical fragments, which causes an attractive force to appear – and subsequently a fractal appears. Due to the exclusion principle, the ‘solitons’ in a fractal should be angled differently; however, the fractals must always be symmetrical, because the binding energy is then at its highest. The attractive force also acts on fractals that contain identical fragments. We can see that consequently a conflict for the domination of identical fragments takes place. Such processes are possibly responsible for free will. We see that the theory of chaos is associated with the fractal field composed of moving threads that are composed of non-rotating-spin dipoles. There is a possibility that the fractals that appear in such a field can very slowly modify the genetic codes.

How to group natural numbers to obtain a special number theory consistent with the Everlasting Theory

The Everlasting Theory begins from the four possible phase transitions of the gas-like Newtonian spacetime and the Titius-Bode law for the strong interactions. The Newtonian spacetime consists of internally structureless tachyons, i.e. the mass of tachyons packed to the maximum is directly proportional to the size to the power of three.
Because of the dynamic viscosity of the liquid composed of the maximally packed tachyons, there appear closed strings that have identical mass. In such closed strings, the tachyons arrange themselves in an Indian file. For such a string, the mass is directly proportional to the length (one dimension) of the closed string but also to its surface (two dimensions) and volume (three dimensions). Because 1^1 = 1^2 = 1^3 = 1, we can assume that the number 1 represents the mass of the fundamental closed string. Due to the phase transitions of such closed strings, tori arise, i.e. objects whose mass is directly proportional to their surface, i.e. to the size to the power of two. The transition from the maximally packed tachyons (3D; the mass is directly proportional to the size to the power of three) to closed strings (1D; the mass is directly proportional to the length) suggests the production of a finite number of sets containing the natural numbers in such a way that a set containing a prime number should also contain the number equal to this prime number to the power of three. Following such a split, we obtain a grouping of the natural numbers into 24 infinite sets. If each concentric circle contains 24 succeeding natural numbers, then on the first circle there are 10 prime numbers (the number 1 as the special prime number, then 2, 3, 5, 7, 11, 13, 17, 19, and 23). There also appear 8 radii that contain many prime numbers and that begin with the following prime numbers: 1, 5, 7, 11, 13, 17, 19, and 23 (we can see that nature behaves as if the number 1 were a prime number). For example, the radius starting from the prime number 13 also contains the following prime numbers: 37, 61, 109, 157, and so on. On this radius there also lie the numbers 13^3, 37^3, and so on. P. Plichta [4] was the first to describe such a division of the natural numbers; he refers to it as the prime number cross. Plichta obtained this division from the requirement that the radius starting from the number 1 should also contain the numbers equal to the prime numbers to the power of two. I obtained an identical division using the Everlasting Theory, i.e. on the basis of the gas-like-Newtonian-spacetime → closed-strings transitions. The radius starting from the number 1, containing the squares of the prime numbers, represents the closed-strings → tori transitions. The Everlasting Theory identifies far more physico-mathematical analogies than P. Plichta described. For example, the ten prime numbers on the first circle suggest that the Everlasting Theory should contain ten parameters. We can reduce the number of parameters to seven because we can ignore the mass densities of three fields. The ten prime numbers also suggest that the phase space of a closed string should contain ten elements. The radii starting with the prime numbers 2 and 3 do not contain other prime numbers. This suggests that two of the seven parameters cannot change with time (on a cosmic scale). These two parameters are absolute parameters. They are the mass density of the structureless tachyons and the dynamic viscosity that leads to the closed strings always having half-integral spin and an identical radius.
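The structure of the prime number cross described above can be verified with a short sketch (Python; the primality test is a plain trial division used only for illustration):

```python
def is_prime(n):
    """Plain trial-division primality test (for illustration only)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Every prime greater than 3 falls on one of the eight radii of the cross.
print(sorted({p % 24 for p in range(5, 100000) if is_prime(p)}))
# -> [1, 5, 7, 11, 13, 17, 19, 23]

# The radius starting from 13 (the numbers 13 + 24k) contains the primes:
print([n for n in range(13, 180, 24) if is_prime(n)])
# -> [13, 37, 61, 109, 157]

# The cube of every such prime lies on the same radius: p^3 = p (mod 24).
print(all(p**3 % 24 == p % 24 for p in range(5, 1000) if is_prime(p)))
# -> True
```

The cubes stay on their radii because every prime p > 3 satisfies p^2 ≡ 1 (mod 24), and therefore p^3 ≡ p (mod 24).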
The prime numbers 2 and 3 are also associated with the internal structure of each microquasar and with the tori arising in the phase transitions of the Newtonian spacetime. Each microquasar emits two tones and the ratio of their frequencies is 2:3. This is associated with the ratio of the lengths of the circular axis and the equator in a dense cosmic object; it is 2:3. There also exists only one series of prime numbers (prime numbers = 5 + d·6, where d = 0, 1, 2, 4, 8, 16, 32, 64, and 128) which leads to the Titius-Bode law. We obtain the Titius-Bode law by applying the following gauge symmetry: R(AU) = A + d·B = (5·2/3 + 5 + d·6)/20.34 = 0.41 + d·0.295, i.e. A/B = 1.39. We know that the numbers 8 (the eight rays containing the prime numbers) and 24 (each circle of the prime number cross contains twenty-four succeeding natural numbers) are characteristic for the Ramanujan modular equations. The Titius-Bode laws for the strong gravitational interactions and the strong interactions lead, respectively, to three symmetrical decays (there are the three succeeding prime numbers 1, 2, 3) and to eight symmetrical decays (there are 8 rays). This suggests that these laws are indirectly associated with the prime numbers. There are also eight different binary systems of neutrinos with rotating spin. It is possible that the prime numbers are associated with probable exclusion principles because the states that result from selection rules are as unique as the prime numbers.

Summary
In this chapter, I have described how to unify particle physics with the theory of chaos via a single field. In the Einstein spacetime theory, carrying the electromagnetic interactions is possible in different arrangements of the dipoles. The divergent or convergent Ketterle-type arrangements of the spins of the weak dipoles lead to particle physics whereas the single-file arrangement of the spins of the dipoles leads to fractal geometry. I have also explained the physical meaning of the complex number. Complex numbers lead to physical reality; the Pascal triangle leads to the Titius-Bode law, and the Titius-Bode law is associated with fractal geometry, i.e. with travelling half-distances, with the distribution of the sources of interactions and with the creation of smaller and smaller self-similar physical objects due to symmetrical decays (bifurcation). Fractals appearing in the fractal field can most probably modify genetic codes very slowly. The grouping of the natural numbers into the twenty-four infinite sets leads to many physico-mathematical relations. Most important are the numbers 2, 8 and 24. The number 2 represents the rotation of spin, 8 represents the carriers of gluons and photons, whereas the phase space of the non-rotating-spin neutrino or binary system of neutrinos contains 24 elements. We can see that these three numbers are associated with the ground state of the Einstein spacetime and its excitations.

References
[1] M W Zwierlein, J R Abo-Shaeer, A Schirotzek, C H Schunck and W Ketterle; Vortices and superfluidity in a strongly interacting Fermi gas; Nature 435, 1047-1051 (2005).
[2] E W Weisstein; Chaos Game; MathWorld.
[3] E W Weisstein; Pascal's Triangle; MathWorld.
[4] P Plichta; God's Secret Formula: Deciphering the Riddle of the Universe and the Prime Number Code.

New Big Bang Theory

Theory of tachyons
The Special Theory of Relativity leads to the conclusion that no particle can accelerate from subluminal to superluminal speed, but the symmetry that is characteristic of the energy-momentum relation
E^2 = p^2c^2 + m^2c^4 (201)
applied in this theory permits the existence of particles moving all the time with superluminal speed (which I refer to as tachyons) and having a real (i.e. positive) inertial mass.
My interpretation of this solution of the Einstein equation is as follows. At superluminal speeds the denominator in the energy equation
E = mc^2/sqrt(1 - v^2/c^2) (202)
is imaginary, so we can multiply the mass and speeds by the imaginary unit i, where i^2 = -1. The solution shows that the energy of a tachyon decreases when its linear speed increases:
E = mc^2/sqrt(v^2/c^2 - 1). (203)
Because the mean speed of the tachyons is 8·10^88 times higher than the speed of light in 'vacuum' (this value leads to the physical constants), in approximation the energy of a tachyon is inversely proportional to its speed:
E(v >> c) = mc^3/v. (204)
Such a phenomenon is possible only if the mass of a tachyon decreases with increasing speed. This is possible due to the direct collisions of the tachyons. But when the size of a tachyon decreases, the area of contact in the direct collisions becomes smaller and smaller, and for some strictly determined size the grinding of a tachyon ends. The mass of a tachyon does not increase when it accelerates because the tachyons are moving in the truly empty volume. This leads to the conclusion that the faster-than-light particles cannot move through a field/spacetime but rather with a field/spacetime. So we can assume that the fundamental spacetime consists of the tachyons placed in the truly empty volume.

Supertachyon
The speed of a tachyon should be zero for an infinite cross-section and infinite for a sizeless tachyon, so we obtain
v = a/r^2, (205)
where a = 0.540031·10^-31 m^3/s for the mean tachyon in the Newtonian spacetime. Mass is directly proportional to the volume of a tachyon:
m = b1·4πr^3/3 = br^3, (206)
where b = 3.485879·10^86 kg/m^3 for the mean tachyon in the Newtonian spacetime. Due to the flows (on a cosmic scale) of finite regions of the Newtonian spacetime, their condensation is possible. Formulae (204)-(206) lead to the following formula for a condensation:
E = dr^5, (207)
where d = 1.739225·10^143 J/m^5. Because the free tachyons have broken contact with the rest of nature, and because practically all binary systems of closed strings are bound inside neutrinos, the Newtonian spacetime does not act similarly to the Einstein spacetime, i.e. the spin energy of the tachyons, closed strings and neutrinos cannot be converted into mass. This causes the Planck critical density and the critical mass not to be associated with a condensate in the Newtonian spacetime. We can calculate the radius and mass of a hypothetical supertachyon whose mass density is equal to the Planck critical density c^5/(hG^2) = 5.1553·10^96 kg/m^3. This definition is for a cubic metre, so we obtain
c^5/(hG^2) = E/(c^2 L^3) = dL^5/(c^2 L^3) = dL^2/c^2, (208)
where L is the side of the cube. The linear speed of such a supertachyon is almost equal to zero, so the definition M/L^3 for the mass density is obligatory. From formula (208) we obtain
L = sqrt(c^7/(dhG^2)) = 1.632189·10^-15 m. (209)
The radius R of the supertachyon is
R = L/(4π/3)^(1/3) = 1.012529·10^-15 m. (210)
The mass M of the supertachyon is
M = 4πc^5R^3/(3hG^2) = 2.2415·10^52 kg. (211)
In reality, because the tachyons have the maximum mass density, a condensate of tachyons having mass equal to M should have a radius of about 4·10^-12 m.
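Formulae (208)-(211) can be checked numerically. In the following minimal sketch (Python) the quoted values are reproduced only when the symbol h in these formulae is read as the reduced Planck constant; the standard values of c, G and the reduced Planck constant used below are my assumption, not part of the text:

```python
from math import pi, sqrt

c    = 2.99792458e8      # m/s
hbar = 1.054571817e-34   # J*s; "h" of formulae (208)-(211), read here as
                         # the reduced Planck constant (assumption)
G    = 6.6743e-11        # m^3/(kg*s^2)
d    = 1.739225e143      # J/m^5, the constant of formula (207)

rho_Pl = c**5 / (hbar * G**2)                  # Planck critical density
L = sqrt(c**7 / (d * hbar * G**2))             # formula (209)
R = L / (4 * pi / 3)**(1 / 3)                  # formula (210)
M = 4 * pi * c**5 * R**3 / (3 * hbar * G**2)   # formula (211)

print(f"rho_Pl = {rho_Pl:.4e} kg/m^3")   # ~5.155e96
print(f"L = {L:.6e} m")                  # ~1.6322e-15
print(f"R = {R:.6e} m")                  # ~1.0125e-15
print(f"M = {M:.4e} kg")                 # ~2.241e52
```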
Of course, the Planck density should have a physical meaning. We can calculate the mean energy density (not the mean mass density) frozen inside the binary systems of neutrinos that a protoworld consists of. The virtual particles arise first of all on the circular axis of the big torus, and their speeds are equal to the speed of light in the Einstein spacetime. This leads to the conclusion that the hypothetical radius RS of the Schwarzschild surface for such particles is two times greater than the radius of the circular axis: RS = 3.616·10^24 m. The mass of the object is Mo = 1.961·10^52 kg. The energy frozen inside the binary systems of neutrinos is v^2/c^2 = (2.4248·10^59)^2 times greater than the mass Mo. This leads to the mean energy density inside the sphere that has the radius equal to the hypothetical Schwarzschild-surface radius (for the virtual particles produced on the circular axis of the big torus) equal to 3Mov^2/(4πRS^3c^2) = 5.8·10^96 kg/m^3. In approximation, we obtained the Planck density. We can say that in approximation the evolution of the protoworlds begins from the Planck critical energy density. We obtain the same mass density for the geometric mean of the Einstein mass of a neutrino (mneutrino) and the Newtonian energy of a neutrino (i.e. the energy of the faster-than-light closed strings a neutrino consists of, mneutrino·v^2/c^2) inside a sphere that has a radius two times greater than the circular axis of the weak charge of the neutrino. The geometric mean mass/energy is mneutrino·v/c = 8.1·10^-8 kg whereas the geometric mean density is 5.8·10^96 kg/m^3. The definition of the mass density shows that we obtain the same mass density when dividing the mass and the volume by the same factor. To obtain the Planck mass and length, the factor must be approximately Fx = 3.7 (it is the ratio of the masses of the neutral kaon and the neutral pion). I must emphasize that what is most important for the creation of particles or cosmic objects (such as, for example, stars) is the mass density, not the mass or volume. This means that it is first of all the Planck density that should have a physical meaning. Due to the inflation of a supertachyon there appear the binary systems of the closed strings and next the binary systems of the neutrinos. Due to the spin of a supertachyon as a whole and the infinitesimally small spin of the tachyons, a supertachyon has internal helicity. It is also uncharged. This means that finally either only neutrons or only antineutrons appear. The mass needed to create the Protoworld (i.e. after the period of inflation) and the cosmic loop (i.e. the early universe) is 2.1431·10^52 kg plus the emitted binding energy (about 2.06 % of this mass). The needed total mass is 2.1835·10^52 kg. We can see that the surplus mass of the supertachyon is only 2.7 %.
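A quick numerical check of the quoted mean energy density (a minimal sketch in Python, using only the values RS, Mo and v/c given above):

```python
from math import pi

M_o    = 1.961e52    # kg, mass of the object
v_to_c = 2.4248e59   # v/c for the closed strings inside the neutrinos
R_S    = 3.616e24    # m, hypothetical Schwarzschild-surface radius

rho = 3 * M_o * v_to_c**2 / (4 * pi * R_S**3)
print(f"{rho:.2e} kg/m^3")   # ~5.8e96, close to the Planck critical density
```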
Eras in the New Big Bang Theory
During a collapse of a region of the Newtonian spacetime the pressure increases, and so does the speed of the tachyons. This means that the mean radius of the tachyons decreases. When such a supertachyon expands in the surrounding Newtonian spacetime composed of slower tachyons, a shock wave arises that can create a cosmic bulb composed of pieces of space packed to the maximum. Inside such a cosmic bulb, the initial parameters cannot change unless new supertachyons can arise. In different cosmic bulbs, four of the six initial parameters can have different values. The maximum mass density of a condensate of tachyons is about 8.3·10^85 kg/m^3. In the Newtonian spacetime condensates of different sizes can appear. To create the Protoworld and the cosmic loop, the minimum radius of a condensate of tachyons should be about 4·10^-12 m. The eras for such a hypothetical condensate are as follows. In reality, besides the Protoworld and the cosmic loop the two spacetimes must also be created, so the mass of the tachyonic condensate must be much, much greater than that of the hypothetical supertachyon.
The era of the production of the binary systems of the closed strings: The binary systems of closed strings arise on the surface of the condensate. Due to the size of the condensate and the speed of the tachyons this era lasted about 10^-109 s.
The era of the production of the binary systems of the neutrinos: From the new theory of the weak interactions, we know that the minimum distance between neutrinos is 2π times (sometimes 2π/3) greater than the radius of the equator of a neutrino. This leads to the following maximum mass density of a volume filled with neutrinos: 10^36 kg/m^3. This means that the volume of the condensate increases (its size becomes approximately the size of a tropical cyclone). Due to the superluminal speeds of the binary systems of the closed strings this era lasted about 3·10^-63 s. Because the neutrinos produce gradients in the Newtonian spacetime, their production stops the inflation.
The era of the production of the neutrons: The minimum distance between neutrons in the neutron stars is about 2 fm. This leads to the following maximum mass density of a volume filled with neutrons: 2·10^17 kg/m^3. This means that the volume of the condensate increases about 4·10^68 times (its size becomes comparable to a planetary orbit). Due to the speeds of the binary systems of the neutrinos, this era lasted about 600 s. Next the biggest neutron stars appeared.
The era of the formation of the protoworlds and the early universes lasted at least about 300 million years.
The era of the evolution of the cosmic loop (i.e. the early Universe) began about 21 billion years ago. Due to the Protoworld → neutrino transition, there appeared the dark energy and its four inflows into the cosmic loop, i.e. into the early Universe, which started the expansion of the early Universe. The rotary vortices composed of the binary systems of neutrinos can arise directly in the Einstein spacetime. Their evolution I described in the paragraph titled "Broken symmetry" in the chapter "Interactions".
We can ask the following question. Are there in the Newtonian spacetime some regions defined by different initial parameters? In different regions, the values of five of the seven parameters could be different. There are only two absolute parameters, i.e. the inertial mass density of the tachyons (which ties mass with radius) and the dynamic viscosity. In the overlapping parts of different regions the grinding of the tachyons takes place. We can calculate the lower limit for the size of our region in the absence of a cosmic bulb. The Universe has existed for about 21 billion years (i.e. about 7·10^17 seconds) and the tachyons are moving with the mean linear speed 2.4·10^97 metres per second. This leads to the lower limit of the size equal to 3·10^115 metres. This is a vast volume but we know that the truly empty volume is infinite. The second solution leads to a cosmic bulb. Then the size of the cosmic bulb can be smaller.
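The two simple estimates of this section can be reproduced as follows (a sketch; reading the quoted lower limit 3·10^115 m as a diameter, i.e. twice the distance travelled by a tachyon during the age of the Universe, is my assumption):

```python
r_condensate = 4e-12    # m, minimum radius of the tachyonic condensate
v_tachyon    = 2.4e97   # m/s, mean linear speed of the tachyons
t_universe   = 7e17     # s, about 21 billion years

# Duration of the closed-string era: the time a tachyon needs to cross
# the condensate.
print(f"{r_condensate / v_tachyon:.1e} s")    # ~1.7e-109 s

# Lower limit for the size of our region, taken here as a diameter
# (twice the distance travelled during the age of the Universe).
print(f"{2 * t_universe * v_tachyon:.1e} m")  # ~3.4e115 m
```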
Reformulated Quantum Chromodynamics
QCD is the theory of the strong interactions, so in this theory there appear the distances of mass characteristic for the atom-like structure of baryons, such as the mass of the up quark (2.23 MeV), the down quark (4.89 MeV) or the strange quark (106 MeV). Moreover, due to the strong interactions described within the atom-like structure of baryons, there appear particles carrying the same masses as the three heaviest quarks, i.e. 1267 MeV, 4190 MeV and 171.8 GeV. QCD does not lead to the very stable atom-like structure of the baryons. Within the reformulated QCD, we can derive the masses characteristic for QCD from the atom-like structure of baryons. Experimental data lead to the atom-like structure of baryons. The phase transitions of the Newtonian spacetime and the symmetrical decays of virtual bosons also lead to the atom-like structure of baryons. In the core there is a torus whose shape leads to the gluon loops with radii 1A/3 and 2A/3, where A denotes the radius of the equator of the torus. The elementary electric charge carried by the torus arises from the gluon loop whose radius is A. The quarks in QCD carry the fractional electric charges ±1Q/3 and ±2Q/3 (in the reformulated QCD the signs of the charges depend on the spin polarization of the surfaces of the tori/electric charges). Assume, then, that the sham quark-antiquark pairs arise from binary systems of the gluon loops when they overlap with the characteristic orbits in baryons. Assume also that the linear mass densities of all gluon loops are the same. Then the masses and electric charges of the sham quarks are in proportion to the radii of the gluon loops. There are six different basic sham quarks. Two of them are associated with the shape of the torus inside the core whereas the next four are associated with the four Titius-Bode orbits for the strong interactions. Due to the value of the sum of the mass of the core of baryons and the relativistic pion under the Schwarzschild surface for the strong interactions, there are only four orbits. There exist six basic sham quarks for which the gluon loops have the following radii: 1A/3, 2A/3, A, A+B, A+2B and A+4B. But there are many other sham quarks when particles interact. The charges and masses of the six basic sham quarks are as follows.
First: ±1Q/3 and 242.5 MeV
Second: ±2Q/3 and 485 MeV
Third: ±1Q and 727.4 MeV
Fourth: ±1.72Q and 1251 MeV
Fifth: ±2.43Q and 1767 MeV
Sixth: ±3.9Q and 2821 MeV
We can see that the first and second sham quarks have the expected electric charges whereas the fourth has the expected mass. The sham quarks are not point particles: they consist of the almost point-like binary systems of neutrinos, which are the Feynman partons. The sham quarks have only one colour, not three as the quarks have. The colour of the sham quarks is associated with their internal helicity. The sham quark-antiquark pairs are colourless. This shows that it is not correct simply to call the sham quarks quarks. In reality, due to the gluon condensates produced in collisions, other gluon loops arise and next the sham quark-antiquark pairs. The ground state of the Einstein spacetime consists of the non-rotating-spin binary systems of neutrinos. They can carry the rotational energies, i.e. the photons and gluons, so photons and gluons are massless particles (they are the rotational energies, i.e. the excitations of the Einstein spacetime). Each rotating binary system of neutrinos has three internal helicities, so the carriers of gluons and photons are the 3-coloured particles. The number of different neutrinos and the three internal helicities lead to 8 different carriers of the photons and gluons. Outside strong fields, the internal helicity of the Einstein spacetime is equal to zero, so to describe electromagnetism we can neglect the internal structure of the carriers. Due to the internal helicity of the core of baryons, the strong fields have internal helicities not equal to zero, so there are the 8 different gluons.
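Because the masses and charges of the basic sham quarks are proportional to the radii of their gluon loops, the whole table above can be generated from A, B and the core mass. A minimal sketch (Python; note that the fifth entry comes out here as about 1774 MeV and 2.44Q, while the text quotes the rounded values 1767 MeV and 2.43Q):

```python
A = 0.6974425    # fm, radius of the equator of the torus in the core
B = 0.50184      # fm, Titius-Bode constant for the strong interactions
m_core = 727.44  # MeV, mass of the core of baryons (loop radius A)

for r in (A / 3, 2 * A / 3, A, A + B, A + 2 * B, A + 4 * B):
    # Mass and electric charge scale with the radius of the gluon loop.
    print(f"r = {r:6.4f} fm   Q = {r / A:5.3f} Q   m = {m_core * r / A:7.1f} MeV")
```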
The relativistic W pion in the d = 2 state (its relativistic mass is 175.709 MeV) is responsible for the strangeness of particles. Due to the four Titius-Bode orbits for the strong interactions, the length of the large loops (their radius is 2A/3), and the helicity of the core of baryons and of the strong field, there is the illusion of the confinement of gluons and sham quarks for low and high energies.

The essential part of the curve R(s) = f(sqrt(s)) for electron-positron collisions
The sham quarks appear as gluon loops whose linear mass density is the same as that of the loop from which the torus inside the core of the baryons arises. Next, they transform into the baryonic-core-like sham quark-antiquark pairs. This means that the mass and electric charge of a sham quark are in proportion to the radius of the gluon loop (msham-quark ~ Qsham-quark ~ Rgluon-loop). For R = A we have Qsham-quark = ±1Q, where -1Q is the electric charge of the antiproton. Consider the following curve [1]:
R(s) = σ(e+e- → hadrons, s)/σ(e+e- → μ+μ-, s) = ΣQi^2, (212)
where the summation concerns the electric charge of the core of the proton (+1Q) and the electric charges of all the different sham quark-antiquark pairs produced in the collisions. For low energies, due to the shape of the torus and the ternary symmetry for the electric/strong charges (see the chapter "General Relativity in Reformulated Quantum Chromodynamics and New Cosmology"), inside the core of baryons there are the following electric charges: ±2Q/3, ±1Q/3 and +1Q, so we obtain R(s) = 2.1. When the core-anticore pairs are produced as well, i.e. when the charges are ±2Q/3, +1Q and ±1Q, we have R(s) = 3.9. The gluon loop overlapping with the d = 1 Titius-Bode orbit for the strong interactions leads to the charges of sham quarks ±1.72Q and to their mass 1251 MeV (is it the charm sham quark?). When in the collisions the charm sham quark-antiquark pairs appear as well, we obtain R(s) = 8.9 (+1Q, ±1Q, ±1.72Q). Particle production (i.e. the production of numerous different loops when a state is broadening) increases the value of R. The essential part of the curve R(s) = f(sqrt(s)) is associated with the atom-like structure of baryons and the production of the sham quark-antiquark pairs. How can we define the essential part for the production of the sham quark-antiquark pairs? The Everlasting Theory shows that the numbers 10 and 26, which appear in the string/M theory, do not define higher dimensions but the numbers of elements in the phase spaces of a loop (10) and of a neutrino (26, fermion) or binary system of neutrinos (26, boson). Such is the origin of the fermion-boson symmetry. We can treat the elements in the phase space of a loop (the 10 elements) as the degrees of freedom. This means that the hypervolume of the phase space and its total mass (the mass is in proportion to the hypervolume), i.e. the mass of the sham quark-antiquark pairs created in electron-positron collisions, must be in proportion to the radius of the gluon loop to the power of 10 and thus to the ratio R(s) to the power of 5.
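Formula (212) with the charge sets listed above reproduces the three quoted plateaus. A minimal sketch (Python):

```python
def R_of_s(charges):
    """Formula (212): the sum of the squared charges, in units of Q."""
    return sum(q * q for q in charges)

# Low energies: the core (+1) and the pairs +-2/3 and +-1/3.
print(round(R_of_s([1, 2/3, -2/3, 1/3, -1/3]), 1))   # 2.1

# With core-anticore pairs: +1 and the pairs +-2/3 and +-1.
print(round(R_of_s([1, 2/3, -2/3, 1, -1]), 1))       # 3.9

# With the charm sham quark-antiquark pairs (+-1.72) as well.
print(round(R_of_s([1, 1, -1, 1.72, -1.72]), 1))     # 8.9
```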
In the electron-positron collisions, the gluon loops arise as binary systems of binary systems of gluon loops, i.e. as quadrupoles. The lightest binary-system meson, which consists of four gluon loops, is the kaon K. The electron-positron-pair → four-gluon-loops (quadrupole) transition looks like an analog of the decay of the neutral kaon (there are two opposite electric charges) to the charged kaon (there is a quadrupole of gluon loops). In each neutral-kaon → charged-kaon transition, an energy of approximately 4.032 MeV is emitted. Calculate the thresholds for sqrt(s) [GeV] from the following formula:
sqrt(s)[GeV] = (mkaon(o) - mkaon(+-))[MeV]·R(s)^5/1000. (213)
For R(s) = 2.1 we obtain sqrt(s) = 0.16 GeV. Baryons arise as the baryon-antibaryon pairs. This means that to create the two lightest sham quark-antiquark pairs, the minimum value for the essential part should be sqrt(s)minimum = 0.97 GeV ≈ 1 GeV. For R(s) = 3.9 we obtain sqrt(s) = 3.6 GeV. For R(s) = 8.9 we obtain sqrt(s) = 227 GeV. But there is the broadening of the central mass (see the explanations below formula (216)) starting from 3 GeV for R(s) = 3.9 and 191 GeV for R(s) = 8.9.

The additional part of the curve R(s) = f(sqrt(s))
The mass of created particles M we can calculate from a formula similar to (213):
M[GeV] = sqrt(s)[GeV] = a(rrange[fm] + A[fm])^10, (214)
where rrange denotes the range of the particle/gluon-condensate created on the equator of the torus in the core of baryons whereas A = 0.6974425 fm is the radius of the equator of the torus in the core. What is the physical meaning of this formula? On the equator of the torus arise the gluon condensates whose masses are the same as those calculated within the atom-like structure of baryons. Knowing that the range of the mass equal to mS(+,-),d=4 = 187.573 MeV is 4B = 2.00736 fm, we can calculate the range of a gluon condensate from the formula
rrange[fm] = mS(+,-),d=4[MeV]·4B[fm]/mcondensate[MeV], (215)
where mcondensate is the mass of the gluon condensate. Since for M = 0.72744 GeV we should obtain rloop = rrange + A = A, then a = 1/(2αw(proton)), where αw(proton) = 0.0187229 is the coupling constant for the weak interactions of baryons. We can rewrite formula (214) as follows:
M[GeV] = sqrt(s)[GeV] = (rloop[fm])^10/(2αw(proton)). (216)
The gluon condensates are the regions with thickened Einstein spacetime, so they are the carriers of the weak interactions. In general, the particles arise when the total length of the loops is equal to the length of two electron loops (when an electron and a positron collide) or two muon loops. The electron loop has length 554.3A whereas the muon loop has length 2.68A. We can see that the gluon condensates carrying greater mass (due to higher energy of collisions) produce lighter particles. This is the reason why in the recent LHC experiments at very high energies the number of produced pions and kaons was greater than expected [2]. This also means that for higher and higher energies of collisions there are weaker and weaker signals of the existence of the atom-like structure of baryons. Simply, at higher and higher energies, more and more baryons have destroyed Titius-Bode orbits for the strong interactions. To 'see' the atom-like structure we should analyse the weak signals for the medium energies of collisions, i.e. close to but below about 1 TeV.
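The thresholds from formula (213) and the calibration of formula (216) can be verified as follows (a minimal sketch in Python; mass_from_loop and mass_from_condensate are hypothetical helper names):

```python
ALPHA_W = 0.0187229   # coupling constant for the weak interactions of baryons
A   = 0.6974425       # fm
B4  = 2.00736         # fm, the range of the mass M_S
M_S = 187.573         # MeV

# Formula (213): thresholds of the essential part of the curve.
for R_s in (2.1, 3.9, 8.9):
    print(f"R(s) = {R_s}: sqrt(s) = {4.032 * R_s**5 / 1000:6.2f} GeV")
# -> 0.16 GeV, 3.64 GeV and 225 GeV (the text quotes 0.16, 3.6 and 227)

def mass_from_loop(r_loop):
    """Formula (216): M [GeV] for a loop radius r_loop [fm]."""
    return r_loop**10 / (2 * ALPHA_W)

def mass_from_condensate(m_cond):
    """Formulae (214)-(215): condensate mass [MeV] -> particle mass [GeV]."""
    return mass_from_loop(M_S * B4 / m_cond + A)

# Calibration point: the loop radius A must give the core mass 0.72744 GeV.
print(f"{mass_from_loop(A):.4f} GeV")   # ~0.727 GeV
```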
Gluon condensates carrying masses that follow from the atom-like structure of baryons can create new particles. There should be weak signals of the existence of type-Zo particles for the d states. There arise gluon balls whose mass is equal to the mass distance between the charged and neutral relativistic pions in the d states multiplied by Xw = 19,685.3 (see formula (57)). Their masses should be 105 GeV for the last state for the strong-weak interactions, 118 GeV for the ground state above the Schwarzschild surface for the strong interactions and 140 GeV for the ground state (see formula (219)). These masses follow from the atom-like structure of baryons. Such gluon balls arise in the centre of the baryons and decay between the equator of the torus (radius = A) and the sphere between the strong and electromagnetic fields (the radius of the last d = 4 orbit is 2.7 fm whereas the range of the strong field is 2.9 fm, so the mean value is 2.8 fm). The mean value for the lifetime or mass we obtain for the Schwarzschild surface for the strong interactions (radius = 1.4 fm). We know that lifetime is inversely proportional to range. This means that the ratio of the maximum lifetime to the central value is 2. On the other hand, lifetime is inversely proportional to the fourth power of mass (see formula (89)). This means that to calculate the broadening of the central mass, we must multiply and divide the central mass by 2^(1/4) = 1.1892. Respectively, the broadenings of the masses are as follows: (88, 125) GeV for the 105 GeV, (99, 140) GeV for the 118 GeV and (118, 166) GeV for the 140 GeV. For the mean central mass (105 + 118 + 140)/3 = 121 GeV, the final broadening is (88, 166) GeV. Experimentalists obtained similar data in the SLD (SLAC Large Detector) experiment [3]. In the high-energy regime, the Titius-Bode orbits are in great part destroyed, so the phenomena on the Schwarzschild surface dominate. For this surface (the radius is 2A) we obtain 128 GeV, and it is in approximation the mean value for the interval (88, 166) GeV, i.e. (88 + 166)/2 = 127 GeV. The values 105 GeV, 118 GeV, 140 GeV and especially the 127-128 GeV are most important in the latest LHC-experiment data [4]. Due to the interactions of the core of baryons with bosons, we observe the mass broadening for the Zo boson. Calculate the mass of the particle produced by a gluon condensate carrying mass equal to the sum of the mass of the core of baryons (727.44 MeV) and the charged pion (139.57 MeV). The total mass is 867 MeV. The calculated mass of the particle is 92.0 GeV, and it is the Zo boson. The broadening is from 77 GeV to 109 GeV, but the ends of this interval are broadened by the virtual mass of the core, i.e. 727.44 MeV, and the virtual mass of the nucleon, i.e. in approximation 939 MeV. For such condensates R(s) = 3.9 whereas the sqrt(s) values are, respectively, 188 GeV and 68 GeV. The broadening of the 188 GeV is (158, 223) whereas that of the 68 GeV is (57, 81). These two signals should be weak, i.e. the R(s) should be much lower than for the Z boson. The three intervals overlap partially or are tangent. The sum of the three intervals is (57, 223) GeV, which is consistent with the experimental data. For the maximum of the R(s), about 683 gluon loops arise and each sham quark has electric charge equal to ±1.623Q. This means that the maximum for the R(s) should be in approximation 1800. For a collision of two electron-positron pairs (the quadrupole), we obtain R(s) ≈ 3600. The mass of the Zo boson we can calculate also from the following formula:
(mpion(+-) - mpion(o),free)·Xw = 90.4 GeV, (217)
where Xw = αw(proton)/αw(electron-muon) = 19,685.3. This boson can decay into hadron jets. Comparing the formulae (213) and (217) shows that the Zo is not a part of the essential part of the curve R(s) = f(sqrt(s)) whereas the W+- boson could be,
(mkaon(o) - mkaon(+-))·Xw = 79.4 GeV, (218)
but it is only an illusion. We should observe a weak peak in the data for the mass equal to the distance between the relativistic masses of the relativistic pions in the d = 1 state (it is 7.11 MeV) multiplied by Xw:
(mW(+-),d=1 - mW(o),d=1)·Xw = 140 GeV. (219)
The particle carrying such mass I will refer to as Zrel. The obtained theoretical result is consistent with the latest data [5].
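The Zo mass and the quoted broadenings follow from formulae (214)-(216) and the factor 2^(1/4). A minimal sketch (Python; mass_from_condensate is a hypothetical helper name):

```python
ALPHA_W, A, B4, M_S = 0.0187229, 0.6974425, 2.00736, 187.573

def mass_from_condensate(m_cond):   # formulae (214)-(216)
    return (M_S * B4 / m_cond + A)**10 / (2 * ALPHA_W)

# Condensate = core (727.44 MeV) + charged pion (139.57 MeV) = 867 MeV.
print(f"{mass_from_condensate(867):.1f} GeV")   # ~92.0 GeV, the Zo boson

# Broadening: multiply and divide the central mass by 2**0.25 = 1.1892.
f = 2**0.25
for m in (105, 118, 140, 92.0):
    print(f"{m} GeV -> ({m / f:.0f}, {m * f:.0f}) GeV")
# -> (88, 125), (99, 140), (118, 166) and (77, 109), as quoted above
```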
We can see that the Zrel particle is a type-Zo particle, so it decays into hadron jets. The Zrel particles arise also due to the transition of gluon balls or loops carrying mass equal to 780 MeV; in approximation, it is the mass of the ω meson (its mass is 782 MeV). Calculate the mass of a particle produced by a gluon condensate carrying mass equal to the mass of the Φ3(1850) meson (m = 1854 ± 7 MeV [6]). The calculated mass for the mass equal to 1847 MeV is 9.45 GeV. This mass is close to the mass of the Υ(1S, 9460 MeV [6]). There are 863 loops and each sham quark carries electric charge equal to ±1.289Q. This leads to R(s) ≈ 1440. For a collision of two electron-positron pairs (the quadrupole) R(s) ≈ 2880. The mass of the π(1800) meson (m = 1816 ± 14 MeV [6]), i.e. the value 1813 MeV, leads to the χb0(1P) meson (m = 9859 MeV [6]).

Masses of quarks applied in the QCD
The masses of quarks applied in the QCD we can calculate within the reformulated QCD that follows from the atom-like structure of baryons. The mass of the up quark (it is the up sham quark because its electric charge is different) is equal to half of the mass distance between the two states of the proton (2.23 MeV). The mass of the down sham quark is equal to half of the mass distance between the two states of the neutron (4.89 MeV). The mass of the strange sham quark (106 MeV) is equal to the mass distance between the point mass in the core of baryons (in approximation Y = 424 MeV = 4·106 MeV) and the torus in the core of baryons (in approximation X = 318 MeV = 3·106 MeV). Moreover, the mean relativistic mass of the relativistic pions in the d = 2 state is in approximation R = 212 MeV = 2·106 MeV. The ratio of (X + Y)/R and X/Y is in approximation 14/3 = 4.667. The exact calculations lead to 4.66913…. The obtained result is close to the Feigenbaum constant δ = 4.66920…. We can see that the mass of the strange sham quark is indirectly associated with the Feigenbaum universality. The point mass Y is responsible for the weak interactions of baryons, so the Y = 4·106 MeV leads to the quadrupole symmetry for the weak interactions and thus also to the bi-dipoles of neutrinos (spin = 2) responsible for the emission and absorption of gravitational energy/mass. The internal structure of the torus and the mass R are responsible for the strong interactions. This means that the X = 3·106 MeV leads to the ternary symmetry for the strong interactions of the torus (i.e. the core of baryons plus a particle-antiparticle pair) whereas the R = 2·106 MeV leads to the binary symmetry for the strong interactions (i.e. to particle-antiparticle pairs and so to some mesons as well). There are also the 3 different electric charges associated with the torus in the core of baryons: 1Q/3, 2Q/3 and Q. This is the ternary symmetry for the electromagnetic interactions. Applying formulae (215) and (216), we can calculate the masses of the three heaviest quarks. The mass of a gluon condensate equal to the mass of the Υ(1S, 9460 MeV) leads to the mass of the charm sham quark M = 1267 MeV. The mass of the bare electron-positron pair is 4 times greater than the circular mass of the electron, i.e. than the mass of the electric charge of the electron. Some analog composed of the strong charges/masses (its mass is 4X) has a mass close to the mass of the charm quark as well, and it is 1273 MeV. The mass of a gluon condensate equal to the mass of the sixth basic sham quark, i.e. mcondensate = 2821 MeV, leads to the mass of the bottom sham quark M = 4190 MeV.
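The same formulae (215)-(216) reproduce the three heaviest sham quark masses; the top value uses the condensate mass X + Y = 742.42 MeV derived in the next paragraph. A minimal sketch (Python):

```python
ALPHA_W, A, B4, M_S = 0.0187229, 0.6974425, 2.00736, 187.573

def mass_from_condensate(m_cond):   # formulae (214)-(216)
    return (M_S * B4 / m_cond + A)**10 / (2 * ALPHA_W)

print(f"charm:  {mass_from_condensate(9460):6.3f} GeV")    # ~1.27 GeV
print(f"bottom: {mass_from_condensate(2821):6.3f} GeV")    # ~4.19 GeV
print(f"top:    {mass_from_condensate(742.42):6.1f} GeV")  # ~171.8 GeV
```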
The sixth basic sham quark is the valence quark, so we can treat the bottom sham quark as a valence quark also. This is the reason why the calculations of the running coupling for the strong interactions via the bottom sham quark are simplest [7]. The mass of a gluon condensate equal to the sum of the masses of the torus inside the core of baryons (X = 318.2955 MeV) and the point mass (Y = 424.1245 MeV), i.e. mcondensate = 742.42 MeV, leads to the mass of the top sham quark M = 171.8 GeV. When the charm sham quark-antiquark pairs appear (m = 2·1267 ≈ 2.5 GeV) that carry the electric charges ±1.72Q, the quadrupole symmetry for the electric charges (±1Q/3, ±2Q/3, ±1Q and ±1.72Q, i.e. four different charge states), characteristic for the weak interactions, is forced. The strong mass of the virtual particles produced by a pair is 2αSm, so the weak mass is 2αWαSm. This means that the running coupling for the strong-weak interactions is αSW = 2αWαS (see formula (79)). For example, the weak mass of the virtual particles produced by the strong mass of the K kaon is in approximation equal to the mass of the pion. This means that for an energy of about 7.1 GeV the 'horns' appear. This is consistent with the experimental data. We can see also how the mass of a charm sham quark-antiquark pair defines the energy E1 for the transition from the strong to the strong-weak interactions. The energy E1 is about 2/3 times lower than the mass 2.5 GeV. To see the transition, the collision energy must be higher than the 2.5 GeV. The similarity of the interactions of different sham quarks follows from the fact that their masses and electric charges are in proportion to the radii of their equators. We see that the masses of particles follow from the atom-like structure of baryons. Particles can arise also due to the decays of the gluon condensates.

Origin of limitations in the non-reformulated QCD
Here I explain the origin of the limitations in the asymptotic freedom described within the mainstream perturbative QCD. The limitations follow from the fact that the atom-like structure of baryons is neglected. The perturbative QCD does not lead to correct results for the asymptotic freedom in the low-energy regime; there appears the mass scale of about 5 GeV and the free parameter of about 217 MeV. The non-perturbative Everlasting Theory is valid in the whole spectrum of energy. At first, I will point out the important things concerning the asymptotic freedom in the mainstream QCD.
1. In the Lagrangian there appear the dimensionless coupling constants. We can change one of them (i.e. the integration constant) into the free dimensional parameter, i.e. the QCD scale, i.e. the lambda parameter. Its central value is 217 MeV. The lambda parameter sets the scale at which the alpha_strong becomes large, i.e. below this mass/energy we cannot apply the perturbative QCD. We must apply some non-perturbative theory.
2. In the perturbative QCD there appears the mass scale, which is chosen arbitrarily. The asymptotic freedom is for a mass scale of about 5 GeV, i.e. greater than the mass of the bottom quark and much smaller than the mass of the top quark, i.e. about 172 GeV.
3. QCD is asymptotically free, thus for large energy we can use perturbation theory safely.
4. In the perturbative QCD the absolute value of alpha_strong has to be obtained from experiment. Today the fundamental parameter is the alpha_strong for the mass of the Z boson (91.19 GeV). The experimental data for the mass of the Z boson are as follows (see hep-ex/0407021, (2004)): alpha_strong(mass of Z boson) = 0.1182 ± 0.0027.
The non-reformulated perturbative QCD gives 0.118 ± 0.006 (see S. Weinberg's book "The Quantum Theory of Fields", Volume II, (1996)). What does the Everlasting Theory say about the origin of the limitations concerning the asymptotic freedom described within the non-reformulated QCD?
1. The reformulated QCD shows (see formulae (214)-(216)) that there appears the energy 3.3 GeV, which is the lowest limit of the collision energy above which the produced gluon balls, which are responsible for the strong-weak interactions in the perturbative regime (i.e. in the non-reformulated QCD), have a mass lower than this lowest limit. We can see that in the non-reformulated QCD a mass scale must appear, but why is the applied mass scale 5 GeV (this mass is greater than the mass of the bottom quark, about 4.2 GeV) higher than the 3.3 GeV (this mass is greater than the mass of the charm quark)? It is because for a mass scale of 3.3 GeV we cannot neglect the mass of the bottom quark (4.2 GeV > 3.3 GeV), and then we obtain incorrect theoretical results. We can see also that the law of conservation of energy is obligatory for energies higher than 3.3 GeV, i.e. then the mass/energy of the produced gluon ball(s) is lower than the energy of collision. At this point we should add that from formula (216) it follows that when the energy of collision increases, the masses of the created gluon balls are smaller and smaller, i.e. the alpha_strong decreases. This leads to the asymptotic freedom in the perturbative QCD. It is also the reason why we detect many more pions and kaons than was expected in the high-energy regime. In the perturbative QCD the mass scale 5 GeV is above the threshold 3.3 GeV, so we should obtain correct results, but one additional free parameter is needed to eliminate the great values of the alpha_strong.
2. The asymptotic freedom within the Everlasting Theory follows from the law of conservation of spin, the atom-like structure of baryons and the Uncertainty Principle. When energy increases, the mass of the carriers of the strong interactions decreases. This follows from the coupling of the core of baryons with the Einstein spacetime. It is obligatory for the whole spectrum of energies, and for the mass of the Z boson we obtain from formulae (81), (83) and (86) the following result: alpha_strong(mass of Z boson) = 0.1176 ± 0.0005. This value is consistent with the experimental result.
3. What is the physical meaning of the lambda parameter 217 MeV (+25, -23)? The atom-like structure of baryons shows that in the d = 1 state, which lies under the Schwarzschild surface for the strong interactions, there can be the relativistic neutral pion, whose mass is about 209 MeV, or the relativistic charged pion, whose mass is about 216 MeV (see Table 1, page 18). Both masses are consistent with the value of the lambda parameter. This shows that for energies below the lambda parameter we must apply the non-perturbative Everlasting Theory because we cannot neglect the relativistic masses of the pions, which are the carriers of the strong interactions as well.
4. There should be differences in the alpha_strong for very high energies. For example, for the energy 2.76 TeV, I obtained alpha_strong = 0.114.

Recapitulation
I have shown the close relations between the perturbative QCD and the Everlasting Theory. The Everlasting Theory, especially the atom-like structure of baryons, shows the origin of the limitations in the perturbative QCD.
The described limitations follow from the fact that the perturbative asymptotic freedom fully neglects the internal structure of the core of baryons. The Everlasting Theory, which is a more fundamental theory than the perturbative QCD, leads to the origin of the limitations in the perturbative QCD. Due to the mass scale, the law of conservation of energy is valid and we can neglect the masses of the bottom, charm, strange, down and up quarks. The free lambda parameter appears in order to eliminate the great values of the alpha_strong. The lambda parameter is associated with the d = 1 state, which appears in the atom-like structure of baryons. The latest data (2011) lead indirectly to the core of baryons as well. The Everlasting Theory leads to the charged core. Its mass is 727.44 MeV. Such a core produces virtual gluons whose masses are ±727.44 MeV. On the other hand, in the following paper: J. Phys. G: Nucl. Part. Phys. 38 (2011) 045003 (17pp), O. Oliveira and P. Bicudo find that "the infrared data (low energy) can be associated with a constant gluon mass of 723(11) MeV, if one excludes the zero momentum gluon propagator from the analysis." This means that the infrared data lead indirectly to the core of baryons.

Summary
Due to the atom-like structure of baryons, we should reformulate QCD. There appear the 6 basic sham quarks and 8 gluons. The gluon-loops → basic-sham-quarks transitions lead to the essential part of the curve R(s) = f(sqrt(s)). The particles → gluon-condensates → new-particles transitions cause the particles to transform into new particles. The new particles are the additional part of the curve R(s) = f(sqrt(s)). The atom-like structure of baryons, the internal structure of the kaons K and their decays in strong fields are most important for understanding the phenomena associated with the high-energy collisions of particles. The reformulated QCD contains six parameters only. The calculated mass of the top quark (171.8 GeV) is associated with the edge of the core of baryons whereas the calculated mass of the bottom quark (4190 MeV) is associated with the edge of the strong field. There should exist the next two flavours of the quarks, associated with the state d = 1 (26.3 GeV) and the state d = 2 (10.46 GeV), but probably the 'edges' concerning these orbits/tunnels are too small for distinct signals to be detected.

References
[1] http://pdg.lbl.gov/current/xsect; K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010).
[2] The CMS Collaboration; Transverse-momentum and pseudorapidity distributions of charged hadrons in pp collisions at sqrt(s) = 0.9 and 2.36 TeV; arXiv:1002.0621v2 [hep-ex], 8 Feb 2010.
[3] http://vixra.files.wordpress.com/2011/08/gfitvars.jpg
[4] http://www.atlas.ch/news/2011/figure-combo2.html
[5] http://blois.in2p3.fr/2011/transparencies/punzi.pdf
[6] K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010).
[7] D. J. Gross, F. Wilczek (1973); "Ultraviolet behavior of non-abelian gauge theories"; Physical Review Letters 30 (26): 1343-1346; doi:10.1103/PhysRevLett.30.1343.

Proton and Loops as Foundations of the Theory of Chaos
This theory leads to the atom-like structure of the proton. There is the core, and outside it the Titius-Bode law for the strong interactions is in force. The binary systems of neutrinos, i.e. the Einstein spacetime components, carry the massless gluons and photons.
The strong field has internal helicity; this causes the properties of gluons to depend on the internal structure of the binary systems of neutrinos, because each such system has three different internal helicities. There are eight different gluons. We can neglect the internal helicities outside strong fields. This means that outside strong fields the gluons behave as photons. The proton consists of additional Einstein spacetime components. In collisions of baryons, there arise gluon loops that outside the strong field behave as photon loops or photons. A certain ratio of the masses of the proton components leads to the Feigenbaum constant. The internal structure of the proton, via some structures built up of the Einstein spacetime components, leaks outside it. This leads to the Feigenbaum universality. The atom-like structure of the proton leads to bifurcation also. Concentric loops composed of the Einstein spacetime components arise due to the weak interactions and lead to the Mandelbrot-like sets. The structure of bare particles and the binding energies associated with this structure make the elimination of renormalization possible. This is possible because the internal structure of the proton codes the Feigenbaum constant applied in the renormalization group theory. The properties of the two spacetimes show that the trajectories of the quantum particles in the Einstein spacetime have no sense. The new particle theory has many more 'tangent points' with the theory of chaos than the SM. The theory of chaos leads to the correct structure of baryons. The Everlasting Theory and some experimental data lead to the atom-like structure of baryons, the six basic sham quarks and the eight gluons. The phase transitions of the fundamental spacetime lead to the core of baryons, which consists of the torus and the point mass in its centre. The point mass (Y = 424.1245 MeV) is responsible for the weak interactions of baryons whereas on the circular axis inside the torus arise the large loops (mass = 67.5444 MeV) responsible for the strong interactions. The symmetrical decays of virtual bosons cause the Titius-Bode law for the strong interactions to be obligatory outside the core of baryons. The equator of the sixth basic sham quark overlaps with the last orbit for the strong interactions. I will try to show that the logistic map [1] and the Feigenbaum constant and bifurcation [2] are associated with the internal structure of the proton. Due to the two spacetimes, we should change the interpretation of quantum mechanics. The new interpretation leads to nonlinearity and shows how we can eliminate it.

The logistic map and the structure of baryons
The logistic map we can write as follows [1]:
x_(n+1) = kx_n(1 - x_n). (220)
Assume that the control parameter k is 1 for the radius of the gluon loop from which the lightest sham quark arises, i.e. for A/3 we have k = 1 (A = 0.6974425 fm is the radius of the equator of the torus inside the core of baryons). Then, for the gluon loop from which the third basic sham quark arises, i.e. the core of baryons, k = 3. Assume also that x_n represents the distance r from the centre of the point mass in the centre of the core of baryons, with x_n = 1 for r = A. After the gluon-loop → third-basic-sham-quark transition, the energy released in collisions of baryons appears first of all as the large loops on the circular axis of the torus in the core of baryons. The large loops are responsible for the strong interactions of mesons whereas the binary systems of the large loops (i.e. the pions) are responsible for the strong interactions of baryons.
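For reference, formula (220) can be iterated directly; for 1 < k ≤ 3 the orbit approaches the fixed point x* = 1 - 1/k, which for the value k = 3 assigned to the core of baryons equals 2/3. A minimal sketch (Python; at the boundary value k = 3 the convergence is very slow, so many iterations are used):

```python
def logistic(k, x0=0.5, n=1_000_000):
    """Iterate formula (220) n times starting from x0."""
    x = x0
    for _ in range(n):
        x = k * x * (1 - x)
    return x

for k in (1.5, 2.5, 3.0):
    # The orbit approaches the fixed point 1 - 1/k (slowly at k = 3).
    print(f"k = {k}: x -> {logistic(k):.3f}   (1 - 1/k = {1 - 1/k:.3f})")
```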
For the circular axis x_n = 2/3. We can see that x_n = 2/3 is the attractor for k = 3. For k < 1, the point mass attracts the surplus energy/mass, so x_n = 0 is the attractor for k < 1. To conserve the spin of the core, the large loops cross the equator of the core of baryons as the binary systems of the large loops with antiparallel spins, i.e. as the pions, or are open. To conserve the symmetrical fusions/decays in strong fields, the bosons appear as groups containing 2^d bosons, where d = 0, 1, 2 and 4.

The Feigenbaum constant and bifurcation code the structure of the proton
Outside the core of baryons the Titius-Bode law for the strong interactions is obligatory:
x_n = A + dB, (221)
where d = 0, 1, 2, 4 whereas B = 0.50184 fm. In the d = 4 state, the carrier of the strong interactions (it is a group of eight loops) decays into 16 gluons. Calculate the range of a gluon ball whose mass is equal to the mass of the gluon loop from which the heaviest basic sham quark arises, i.e. the sixth basic sham quark (mass = 2821 MeV). The mass of the gluon ball is (X + Y)/mH(+) = 1.020593 times greater than the mass of the sham quark because during the sham quark creation the weak binding energy is emitted. In the last formula X = 318.2955 MeV is the mass of the torus inside the core of baryons whereas mH(+) = 727.440 MeV is the mass of the core of baryons. This means that the mass of the gluon ball is 2879 MeV. Since the range of the mass equal to mS(+,-),d=4 = 187.573 MeV is 4B = 2.00736 fm, the range of the gluon ball is Δr = 0.13078 fm. If such a gluon ball arises on the equator of the torus in the core of baryons and its motion is radial, then it transforms into the sham quark at the distance r from the centre of the point mass, where r = A + Δr, i.e. r = 0.82822 fm. Since k = 1 for A/3, for r = 0.82822 fm we obtain k = 3.5625, whereas the Feigenbaum bifurcation for the cycle 2^n = 16 leads to k = 3.5644. Because the equator of the sixth sham quark overlaps with the last orbit for the strong interactions, we can say that the k = 3.5625 is some analog of the upper limit for the strong interactions. I should emphasize also that the set of numbers d = 0, 1, 2 and 4 for the strong field is characteristic for a period-doubling cascade. In the strong fields the most important facts are that the symmetrical decays are the preferential decays and that range is in inverse proportion to the mass of a particle. This leads to the period-doubling cascade. The four-neutrino symmetry leads also to the n = 3, 6, 12 period-doubling cascade whereas for the neutron black holes d = 1, 2, 4, 8, 16, 32, 64 (for a binary system also 96) and 128. In the d = 4 state is the gluon-photon transition (more precisely, at the distance 4πA/3). The carriers of the photons and gluons interact weakly. Due to the ratio of the coupling constants for the weak interactions of the proton and the electron, Xw = αw(proton)/αw(electron-muon) = 19,685.3, the radius of the Bohr orbit is in approximation Xw times greater than the last orbit for the strong interactions, A + 4B ≈ 2.7 fm. In biology and chemistry the electromagnetic interactions are most important, so to solve some problems which appear in these two fields of knowledge we must know the internal structure of protons, electrons and fields. Nonlinearity appears when we do not take into account the local binding energies.
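The bifurcation estimate given above can be reproduced step by step (a minimal sketch in Python, using only values quoted in this section):

```python
A   = 0.6974425   # fm
B4  = 2.00736     # fm, the range of the mass M_S
M_S = 187.573     # MeV
X, Y, m_H = 318.2955, 424.1245, 727.440   # MeV

m_ball = 2821 * (X + Y) / m_H   # ~2879 MeV, mass of the gluon ball
dr = M_S * B4 / m_ball          # ~0.13078 fm, its range
r  = A + dr                     # ~0.82822 fm
k  = r / (A / 3)                # k = 1 corresponds to the radius A/3
print(f"k = {k:.4f}")           # 3.5625; the 2^n = 16 cycle needs k = 3.5644
```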
Can the internal structure of the proton lead to the Feigenbaum constant δ = 4.669201609? The core of the proton consists of the point mass, the torus and, by analogy to the source of the radiation mass of an electron, of an electron-positron pair and its electromagnetic mass that appears in interactions. When we neglect the binding energies, the mass of the core of the proton (of all baryons) is mcore,chaos = Y + X + 2melectron(1 + αem) = 743.4498 MeV. The mean mass of the relativistic pions in the d = 1 state we can calculate from the following formula:
MW,d=1,mean = mW(o),d=1·y + mW(+),d=1·(1 - y) = 212.1417 MeV. (222)
In interactions, in the d = 1 state, there appears an additional electromagnetic energy, not associated with a binding energy, equal to
Δmem = (mW(+),d=1 - mW(o),d=1)(1 - y)αem = 0.025535 MeV. (223)
The mean energy in the d = 1 state not associated with the core is
Z = MW,d=1,mean + Δmem = 212.1673 MeV. (224)
Calculate the following ratio:
(mcore,chaos/Z)/(X/Y) = 4.66913. (225)
In the numerator is the ratio of the masses of the two parts of the proton as a whole (the core and the relativistic pion) whereas in the denominator is the ratio of the masses of the two parts of the core of the proton (the torus and the point mass). We can see also that the numerators in both the numerator and the denominator contain the mass of the torus associated with the electric charge. Due to the two spacetimes, the internal structure of the proton 'leaks' outside the strong field of the proton; this is due to the carriers of the gluons and photons, i.e. due to the binary systems of the neutrinos the Einstein spacetime consists of. We can see that the calculated ratio is close to the Feigenbaum constant. The 'leaking' structure of the proton (the leaking information concerning the internal structure of the proton) causes different systems to behave identically (qualitatively/structurally and quantitatively/metrically); this leads to the Feigenbaum universality. We should notice also that the ratio of the mass of the torus X, i.e. the mass of the source of the strong interactions, to the mass of the carrier of the strong interactions for mesons, i.e. the mass of the large loop (67.5444 MeV), is close to the Feigenbaum constant and is 4.71. Notice also that 3 + Y/mcore,chaos = 3.5705 whereas Y/(mcore,chaos + Z) = 0.4438. For the real proton Y/mproton = 0.4502. The last two results are close to the exponent β = log 2/log δ = 0.4498 applied in the renormalization group theory.
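Formula (225) and the exponent β can be checked directly (a minimal sketch in Python):

```python
from math import log

X, Y = 318.2955, 424.1245   # MeV, torus and point mass of the core
m_core_chaos = 743.4498     # MeV, core mass without binding energies
Z = 212.1673                # MeV, formula (224)

print(f"{(m_core_chaos / Z) / (X / Y):.5f}")   # formula (225): 4.66913
print(f"{log(2) / log(4.669201609):.4f}")      # the exponent beta: 0.4498
```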
Mandelbrot set
Impulses of electric current create concentric loops composed of the Einstein spacetime components, i.e. the binary systems of neutrinos (the weak dipoles). A loop is stable when the spins of the weak dipoles are tangent to the loop. The weak mass of the virtual particles produced by a loop we can calculate from the formula mw = αwm, where m is the mass of a loop whereas αw is the coupling constant for the weak interactions. For example, the weak mass of the virtual particles produced by the large loop is equal to the mass distance between the neutron and the proton. Due to the following formula, a larger loop creates a smaller loop, and so on:
αw,n+1 = Gw(αw,n·m)^2/(ch) + C1, (226)
where Gw is the weak constant whereas C1 is a constant which follows from the entanglement of the components of the loops (they exchange the binary systems of the closed strings the neutrinos consist of). A field composed of groups of such sets composed of the concentric stable loops is the fractal field. The physical properties of such a field we can describe applying the imaginary unit i = sqrt(-1). There appear the polar form of complex numbers, i.e. the imaginary unit and the sine and cosine, and the second power of the moduli of the complex numbers, i.e. the quadratic functions. We can rewrite formula (226) as follows:
αw,n+1 = C2(αw,n)^2 + C1. (227)
This relation is an analog of the Mandelbrot map
z_(n+1) = z_n^2 + C. (228)
It is an iteration on the complex plane of the following type: take a complex number z, calculate its second power and add an initial number C, and so on. The 3-space-dimensional fractals produced, for example, by brains I refer to as the 'solitons'. Creative thinking leads to phase transitions of smaller 'solitons' into greater 'solitons'. Next, there is a period of rebuilding of the 'solitons' containing false fragments. Such a period can last for a very long time.
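Formula (228) is the standard Mandelbrot iteration; a minimal membership test (Python; the bailout radius 2 and the iteration cap are the usual conventions, not values from the text) looks as follows:

```python
def in_mandelbrot(C, n_max=100):
    """Formula (228): C belongs to the Mandelbrot set if the orbit of
    z = 0 under z -> z**2 + C stays bounded (here: |z| <= 2)."""
    z = 0j
    for _ in range(n_max):
        z = z * z + C
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(-1 + 0j))   # True: the period-2 orbit 0, -1, 0, -1, ...
print(in_mandelbrot(1 + 0j))    # False: 0 -> 1 -> 2 -> 5 -> ... escapes
```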
The ‘leaking’ structure of the proton causes different systems to behave identically – this leads to the Feigenbaum universality, i.e. the Feigenbaum scaling is the same for many functions (for example, xn+1 = kxn(1 − xn) and xn+1 = r·sin(πxn)) and processes. We can say that nature ‘chooses’ such functions that some phenomena are in resonance with the internal structure of the proton. Information about the structure of the proton leaks due to the virtual structures composed of the entangled Einstein spacetime components. They are the ghosts of protons and they carry the negative degrees of freedom, i.e. due to the entanglement, the virtual structures absorb surplus energy. This causes that we can apply the renormalization group theory, and so the Feigenbaum scaling also. We can eliminate the renormalization group theory via the correct internal structure of the bare particles and the local binding energies. Impulses of electric current create concentric loops composed of non-rotating-spin binary systems of neutrinos with spins tangent to the loops. Entanglement of groups of such sets composed of the particular loops leads to a Mandelbrot-like set. Chaos is due to the lack of initial synchronization with the internal structure of protons and the four-neutrino symmetry. The attractors appear because a system wants to synchronize its behaviour with the Universe/nature. Due to the two spacetimes, trajectories of quantum particles have no sense. The more fundamental spacetime, i.e. the Newtonian spacetime, is statistical/deterministic whereas the second, i.e. the Einstein spacetime, is quantum/nondeterministic and leads to free will. Due to the interactions of the deterministic and nondeterministic fields, the quantum/nondeterministic fields try to behave in a deterministic way. Nondeterministic behaviour appears only sporadically, when deterministic behaviour is broken. Nonlinearity follows from the locally changing binding energy. We can eliminate nonlinearity when we take into account the internal structure of the bare particles and the appropriate binding energies. For example, we can calculate the binding energy emitted by an electron or muon due to the electroweak interactions of the virtual electron-positron pair(s) with the bare electron or muon. There are two methods to calculate the magnetic moment of the electron: via the Feynman diagrams or via the internal structure of the bare electron and the local binding energies. The first method is nonlinear whereas the second is linear and very simple. Due to the local phenomena that follow from nonlinearity, nature drifts towards linearity. When we neglect the local phenomena, the geometry of spacetime and of other fields depends nonlinearly on mass density. We cannot eliminate the nonlinearity from the mathematical description of a system in which the local binding energies behave in an unforeseeable manner and the system cannot emit them at least partially. But even then, the detected noise carries some information about the mean values of the local binding energies. Sometimes the mean values change over time in an unforeseeable manner. Then prediction of the behaviour of such a system is impossible. When a system cannot eliminate the nonlinearity via emission of the local binding energies, turbulence appears. Turbulence is a disorder without rules. Chaos is an ordered disorder via simple processes/rules. Attractors appear due to convergent lines of forces, period-doubling cascades appear due to symmetrical decays of particles, whereas 3D fractals appear due to cascades of smaller and smaller loops.
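As a numerical aside, the sketch below (a minimal Python check I add here, not part of the original derivation) reproduces the ratio (225) and the exponent-like values quoted in this chapter; it uses only the masses stated in the text (the point mass Y = 424.1245 MeV is quoted later in the document, and the torus mass X follows from the sum that defines mcore,chaos).

```python
import math

# Check of the chaos-chapter ratios (formulas (222)-(225)); all masses in MeV.
m_core   = 743.4498          # mass of the core of the proton (text above formula (222))
Z        = 212.1673          # mean energy in the d = 1 state, formula (224)
Y        = 424.1245          # point mass, quoted later in the text
alpha_em = 1 / 137.036
m_e      = 0.510999
X = m_core - Y - 2 * m_e * (1 + alpha_em)   # torus mass, from m_core = Y + X + 2m_e(1+alpha_em)

delta = (m_core / Z) / (X / Y)
print(round(delta, 5))                          # ~4.66913, close to the Feigenbaum constant
print(round(3 + Y / m_core, 4))                 # ~3.5705
print(round(Y / (m_core + Z), 4))               # ~0.4438
print(round(math.log(2) / math.log(delta), 4))  # beta = log2/log(delta) ~ 0.4498
```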
The purposeful causes are typical only for free will. The matrices of the DNA arose before the ‘soft’ big bang and are composed of many of the four different weak dipoles (they are the carriers of the photons and gluons). Some ‘purposeful behaviour’ of many systems follows from the ‘leaking’ internal structure of the proton and the information coded in the DNA matrices.

References
[1] Weisstein Eric W., ”Logistic Equation” from MathWorld
[2] Feigenbaum Mitchell, Universal Behaviour in Nonlinear Systems, “Los Alamos Science” 1 (1981)
[3] Mandelbrot Benoit, Fractals and the Rebirth of Iteration Theory, in: Peitgen Heinz-Otto, Richter Peter H., The Beauty of Fractals, p. 151-160 (Berlin: Springer-Verlag, 1986)

Theoretical Curve for the Kaon-to-Pion Ratio
Some experimental data lead to the atom-like structure of baryons. This theory leads to the following conclusions. There is a core composed of the torus with the point mass in its centre. The structure of the torus leads to the laws of conservation of the electric charge and spin. There appears an internal helicity of the torus. Positive electric charges, as, for example, of the proton, positron and positive pion, have the left internal helicity (this concerns the core of the neutron also) whereas the negative electric charges and the antineutron have the right internal helicity. The gluon loops or pions or other bosons carry the strong interactions. Pions are binary systems of gluon loops whereas kaons are binary systems of binary systems of gluon loops, so they are quadrupoles. The strong field has the same internal helicity as the torus in the core. Since the kaons are binary systems, the produced kaons always have the resultant internal helicity the same as the torus in the core, i.e. the helicity does not depend on the electric charge of the kaons. The kaon-to-pion ratio mostly depends on the internal helicity of the electric charge inside pions. Experimental data that concern the kaon-to-pion ratio are collected on the following website [1].

Characteristic values for the kaon-to-pion ratio
The number of produced particles is inversely proportional to their mass. For the K/π ratio

K/π = mi/mkaon(+-), (229)

where mi is the mass of loops composed of gluons or of structures composed of gluon loops, whereas mkaon(+-) is the mass of the charged kaon. When the energy of collisions increases, more and more of the more energetic gluons, loops and structures are produced. Pions are binary systems of gluon loops; the mass of each loop of a resting pion is 67.54 MeV and consists of two gluons. Each such gluon carries energy equal to 33.77 MeV. The mass of the charged pion is 139.57 MeV. The mass of the pion leads to the coupling constant for the strong interactions of the non-relativistic nucleons, αs,NN = 14.4. For the very short period of the K and π production in the nucleon-nucleon collisions, the produced nucleon-antinucleon pairs are at rest. The strong masses of the charged pion and kaon we can calculate by multiplying their masses by the coupling constant. For the charged pion we obtain about sqrt(s) = 2 GeV and it is the starting point of the curve for the K/π ratio. For the charged kaon we obtain sqrt(s) = 7.1 GeV. A kaon is a binary system of binary systems of loops, so it is a quadrupole of loops. The masses of the gluon loops the resting kaons consist of are greater than in the resting pions.
For the gluons the four-neutrino symmetry is obligatory, so there arise particles containing the following numbers of gluons x

x = 2·4^d, (230)

where d = 0, 1, 2, 4, 8… We can see that for energies lower than 7.1 GeV the pions and kaons arise from the single loops (x = 2 for d = 0). When the energy of collisions increases, there arise more and more of the more energetic gluons from which the kaon loops arise. For energies higher than 7.1 GeV pions are produced from single loops (mi = 67.54 MeV) whereas kaons are produced at once as the quadrupoles (x = 8 for d = 1; there are eight different gluons carrying the energies appropriate to produce the kaon loops). This leads to K/π = 67.54/493.7 ≈ 0.14 and it is the asymptote for positive and negative particles (the black basic curve on the figure). To obtain the real curve we must take into account also the helicity of the electric charge inside pions. The helicity of the charge of the negative pions is opposite to that of the colliding nucleons, so at the threshold energy for kaons, i.e. 7.1 GeV, they are produced from the gluons which carry energy equal to mi = 33.77 MeV. This means that at the energy sqrt(s) = 7.1 GeV, for the negative particles, there should be K/π = 33.77/493.7 ≈ 0.07. We can see that the curve K/π = f(sqrt(s)) is lowered in relation to the basic (black) curve and has a small maximum at the threshold energy. The helicity of the charge of the positive pions is the same as that of the colliding nucleons, so they arise at once as the positive pions. This means that at the threshold energy, for the positive particles, there should be K/π = 139.57/493.7 ≈ 0.28. We can see that the curve K/π = f(sqrt(s)) is elevated and there appears the big ‘horn’.

Summary
The atom-like structure of baryons leads to the two curves K/π = f(sqrt(s)) consistent with experimental data. The obtained theoretical results are collected on the figure. The division of the basic (black) curve follows from the different helicities of the electric charges of pions (left helicity for positive pions and right for negative pions) in relation to the helicities of the colliding nucleons (left helicity). We can neglect the helicities of the charges of the kaons because they are binary systems. In such systems there appears an additional spin speed which causes that the total helicity is always the same as that of the colliding nucleons.

References
[1] http://en.wikipedia.org/wiki/File:Strange_production_7.gif
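The three characteristic values above follow from formula (229) by simple division; a minimal Python check I add here, using only the masses given in the text:

```python
# Characteristic K/pi values from formula (229); all masses in MeV.
m_loop  = 67.54    # one gluon loop of a resting pion
m_gluon = 33.77    # one gluon of such a loop
m_pi_ch = 139.57   # charged pion
m_K_ch  = 493.7    # charged kaon

print(round(m_loop  / m_K_ch, 2))   # ~0.14  asymptote of the basic curve
print(round(m_gluon / m_K_ch, 2))   # ~0.07  negative particles at the threshold
print(round(m_pi_ch / m_K_ch, 2))   # ~0.28  positive particles (the 'horn')
```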
The Cross Section for Production of the W Boson
Here I show how the cross section for production of the W boson as a function of collision energy follows from the atom-like structure of baryons. We know that the cross section is inversely proportional to the square of the mass of the created particle. For the mass of the proton at rest, the cross section for the weak interactions is the equatorial cross section of the point mass of the proton. Then, for the W boson, for collision energy equal to the mass of the W boson (this theory leads to 79.4 GeV – see formula (119) – or 80.38 GeV – see the discussion below formula (246)), it is

σW(mW = 80.38 GeV) = πrp(proton)^2/(mW/mproton)^2 = 0.3248 nb, (231)

where rp(proton) = 0.8711·10^-17 m is the radius of the point mass of the proton (see formula (49)) whereas mW is the mass of the W boson. When the energy of collision increases, the radius of the point mass increases, so the cross section also. The cross section is proportional to the equatorial cross section of the point mass whereas the volume of the point mass is proportional to the collision energy E. This means that there appears the following factor f

f = (E/mW)^2/3. (232)

We can see that the formula for the mean cross section for production of the W+ and W- bosons as a function of collision energy looks as follows

σW,mean(E[TeV]) = πrp(proton)^2·(E/(mW/1000))^2/3/(mW/mproton)^2 = 1.744·E^2/3 nb. (233)

This is the mean value for the W+ and W- bosons. Inside the core of baryons appear the sham quark-antiquark pairs which carry the following electric charges: ±1/3 and ±2/3 (see Chapter “Reformulated Quantum Chromodynamics”). The electric charge of the core of the proton is +1. The charge helicities of the W+ boson and the proton are the same, so the W+ boson is associated with the greatest positive electric charge, i.e. the +1. The charge helicities of the W- boson and the proton are opposite, so the W- boson is associated with the absolute value of the -2/3. We know that the involved energy is proportional to the absolute value of the electric charge. This leads to the following formula

(σW+ + σW-)/2 = σW,mean, (234)

where σW- = (2/3)^2/3·σW+. This leads to the conclusion that in the formula for the cross section for the W+ boson there appears the factor g1(+) = 1.13434 whereas for the W- boson the factor g2(-) = 0.865662. For the total cross section for the proton-proton collisions there appears the factor g3(±) = g1(+) + g2(-) = 2, whereas for the proton-antiproton collisions the factor g4(±) = 2g1(+) = 2.26868. Now the formula for the cross section looks as follows

σW[nb] = gi·σW,mean(E[TeV]) = 1.744·gi·E^2/3, (235)

where i = 1, 2, 3 or 4. Formula (235) is not the final formula. For mesons carrying mass close to the proton (for example, ω(782)) or for nuclei composed of such mesons (for example, Υ(9460 MeV)), the cross section should be close to the equatorial cross section of the point mass of the proton at rest, i.e. about 2.4·10^-3 mb. When the energy of the proton increases, the emitted energy also increases, and for energy of approximately 18 TeV it is 100 % (see Chapter “Interactions”). Then the radius of the point mass of the proton is equal to A/3 and it is the radius of the gluon loop from which the first basic sham quark is produced. The masses of the sham quarks are proportional to their radii, so the energy emitted by a relativistic proton is proportional to the radius of the point mass of the proton. This means that the ability to produce the W bosons increases with the energy of collision and is equal to one for 18 TeV. To obtain the correct value for the cross section, we must multiply formula (235) by the following function

Br = rpoint-mass/(A/3) = (E[TeV]/Eo[TeV])^1/3, (236)

where Eo = 18 TeV. The final formula for the cross section looks as follows

σW[nb]·Br = gi·σW,mean(E[TeV])·Br = 0.6655·gi·E. (237)

This is a linear function. We can see that the cross section for 7 TeV is 3.5 times greater than for 2 TeV. For 7 TeV, for the W+ boson we obtain 5.3 nb whereas for the W- boson 4.0 nb, and this is consistent with experimental data.

Summary
The figure entitled “The cross section for production of the W boson as a function of collision energy” contains the obtained theoretical results.
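A short Python check I add of the numerical coefficients in formulas (231)–(237); the proton mass 938.272 MeV is a standard value I supply here, everything else is taken from the text:

```python
import math

# Check of formulas (231), (233), (235) and (237); masses in MeV, r in metres.
r_p = 0.8711e-17          # radius of the point mass of the proton
m_W = 80380.0             # W boson mass (80.38 GeV)
m_p = 938.272             # proton mass (standard value, supplied here)
NB  = 1e-37               # 1 nanobarn in m^2

sigma_rest = math.pi * r_p**2 / (m_W / m_p)**2 / NB
print(round(sigma_rest, 4))                            # ~0.3248 nb, formula (231)

c_mean = sigma_rest * (1000.0 / (m_W / 1000.0))**(2/3) # prefactor of E^(2/3), E in TeV
print(round(c_mean, 3))                                # ~1.744, formula (233)

c_lin = c_mean * (1.0 / 18.0)**(1/3)                   # times Br = (E/18)^(1/3), (236)-(237)
print(round(c_lin, 4))                                 # ~0.6655

g_plus  = 2 / (1 + (2/3)**(2/3))                       # from formula (234): g1(+) ~ 1.13434
g_minus = g_plus * (2/3)**(2/3)                        # g2(-) ~ 0.865662
E = 7.0                                                # TeV
print(round(c_lin * g_plus * E, 1),                    # ~5.3 nb for W+
      round(c_lin * g_minus * E, 1))                   # ~4.0 nb for W-
```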
Neutrino Speed
The data in this paper [1] lead to the atom-like structure of nucleons. The Schwarzschild radius for the strong interactions is 1.4 fm. From the Uncertainty Principle it follows that such is the range of the neutral pions produced in the centre of the baryons. Assume that the muons, pions and W bosons (denote their mass by m) arise in the centre of the core of nucleons as the entangled gluon-ball quadrupoles. Such a quadrupole can be entangled with a neutrino (denote its mass by mneutrino) on, for example, the Schwarzschild surface. This means that the centrifugal force is directly proportional to the product 4m·mneutrino, where mneutrino << m, so we can apply to them the Newton’s second law. The Newton’s second law we can write for neutrinos as follows

mneutrino·Δvneutrino = Fneutrino·tint. (239)

The strange quark-antiquark pairs and next the muon-antimuon pairs arise in the centre of the core of baryons as the gluon-ball quadrupoles, i.e. as the quadrupoles of pure energy. This means that such objects are also non-relativistic objects, so we can apply to them the Newton’s second law. The force acts on the carriers of gluons, i.e. on the Einstein spacetime components. They are the neutrino-antineutrino pairs, i.e. the weak dipoles carrying spin equal to 1, so they are non-relativistic particles also. The speed of the entangled weak dipoles is equal to the c. From formulae (238) and (239) we obtain that the increase in the radial speed of the neutrinos that appear in the beta decays is

vneutrino – c = Δvneutrino ~ 4{mneutron – (mproton + melectron)}^2 = 4M^2. (240a)

The increase in the radial speed of the neutrinos appearing in the weak decays of the exchanged gluon-ball pairs is

vneutrino – 0 = Δvneutrino ~ 4m^2, (240b)

where m is the mass of the gluon ball which decays due to the weak interactions. The energy of such gluon balls can be equal to the mass of the muons or to one fourth of the mass of the core of baryons or to the mass of the W bosons. Due to the weak interactions of the neutrinos with the gluon balls, the neutrinos appearing in the beta decays and the neutrinos appearing in the decays of the gluon balls must have the same resultant speed. From formulae (240a) and (240b) we obtain

(vneutrino – c)/vneutrino = (M/m)^2. (241)

Since vneutrino ≈ c, in approximation

(vneutrino – c)/c = (M/m)^2, (242)

or

vneutrino = {1 + (M/m)^2}c. (243)

To the gluon balls we can apply the theory of stars. The theory of stars leads to the conclusion that lifetime T is inversely proportional to the fourth power of mass, i.e. T ~ 1/m^4, so we can rewrite formula (242) in terms of the lifetimes (formula (244)). We can see that we can calculate the neutrino superluminal speeds both from the masses of particles (formula (242)) and from their lifetimes (formula (244)). Both methods lead to the experimental data. From the Uncertainty Principle and the invariance of the neutrino mass it follows that the square of the change in neutrino speed is inversely proportional to the time of exchange t. On the other hand, from formula (240a) and the relation T ~ 1/M^4 it follows that a similar relation holds for the lifetime T. This means that the interval for the broadening of the time of exchange t, i.e. the (t/2, 2t), leads to the following conclusions. To obtain the maximum neutrino speed, we must multiply the central value of the increase in neutrino speed in relation to the c, i.e. the Δv/c = (v – c)/c, by sqrt(2), i.e. vmaximum = (1 + Δv·sqrt(2)/c)c. For the minimum speed we obtain vminimum = (1 + Δv/(sqrt(2)·c))c. The theoretical results are the central values, whereas in the round brackets we will write the increases in speed for the maximum neutrino speed. For lower energies, such as in the MINOS experiment [4], there are mostly the neutrinos from the decays of neutrons and of gluon-ball pairs carrying energy equal to the mass of the muon-antimuon pairs.
The ratio of the lifetime of the neutron to the lifetime of the muon is the smallest one (882/2.20·10^-6 = 4·10^8 [2]), so the obtained neutrino speed is the upper limit. From formula (243) it follows that for the more precise MINOS experiment, for the neutrino speed we should obtain 1.000050(21)c, i.e. the maximum neutrino speed should be 1.000071c. For higher energies, such as in the OPERA experiment [5], there are mostly the neutrinos from the weak decays of the neutrons and of gluon-ball pairs carrying energy equal to half of the mass of the core of baryons. The mass of one gluon ball is 181.7 MeV. This means that the lifetime of such a gluon ball, which decays due to the weak interactions at once into 3 neutrinos and an electron, is 8.74 times shorter than the lifetime of the muon. This leads to the conclusion that the neutrino speed is 1.0000169(70)c, i.e. the maximum speed is 1.0000239c, so the time-distance between the fronts of the neutrino and photon beams is 58.4 ns. For the highest energies, such as in the explosions of the neutron cores of supernovae, the neutrinos from the decays of the neutrons and of gluon-ball pairs carrying energy equal to the mass of the W boson-antiboson pairs dominate. The mass distance between the point mass and the torus in the core of baryons is equal to the mass of the muon, whereas the point mass, which is responsible for the weak interactions of baryons, is 4 times greater than the muon. The quadrupole symmetry shows that creation of systems containing 4 elements is preferred. This means that the lifetime of the muon is characteristic also for the point mass (i.e. 424 MeV = 4·105.7 MeV – each one of the four muons lives 2.2·10^-6 s [2]). This leads to the conclusion that the lifetime of the W bosons (mass = 80,400 MeV [2]) which decay due to the weak interactions is Tlifetime-W-boson = 2.2·10^-6 s/(80,400/424)^4 = 1.7·10^-15 s. This leads to the following neutrino speed: 1.0000000014(6)c, i.e. the maximum speed is 1.000000002c (i.e. (1 + 2·10^-9)c). This result is consistent with the observational facts [6]. The time-distance Δt between the fronts of the neutrino and photon beams, measured on the Earth for the SN 1987A, should be Δt = 168,000 ly · 365 days · 24 hours · 2·10^-9 = 3 hours. If before the explosion the mass of the SN 1987A was close to but greater than four masses of the Type Ia supernovae, i.e. greater than 5.6 times the mass of the sun, then due to the quadrupole symmetry, during the gravitational collapse there could arise a system containing 4 Type Ia supernovae. After the simultaneous explosion of the 4 supernovae we should not observe there a remnant, i.e. a neutron core. Due to gravitational collapse, a supernova transforms into a neutron star. The collapse decreases the pressure inside the neutron star, which forces the inflow of the dark energy into the star. Next, there are the beta decays of the neutrons and the nuclear fusions of the nucleons. These two processes appear simultaneously. The additional dark energy and the released binding energy cause the explosion of the neutron star. This means that neutrinos and photons appear on the surface and inside the neutron star simultaneously. When the mass of a neutron star is equal to the mass of the Type Ia supernova, neutrinos and photons appear simultaneously in the whole volume of the star. We can see that a supernova that has mass approximately 5.6 times the mass of the sun should have practically no plasma layer around the four neutron stars. This means that during the explosion of such a quadrupole of neutron stars there should be no time-distance between the fronts of the neutrino and photon beams. The 3-hour delay observed on the Earth must be due to the superluminal speed of neutrinos.
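A short Python check I add of the three cases above; it assumes the lifetime form of formula (242) implied by T ~ 1/m^4 (the relation the text calls formula (244)), with the lifetimes and masses quoted in the text:

```python
import math

# (v - c)/c = (M/m)^2 = sqrt(T_m / T_M), using T ~ 1/m^4; lifetimes in s, masses in MeV.
T_neutron = 882.0
T_muon    = 2.2e-6

dv_minos = math.sqrt(T_muon / T_neutron)
print('%.1e' % dv_minos)                     # ~5.0e-05 -> 1.000050c (MINOS)

m_ball, m_mu = 181.7, 105.66                 # gluon ball of half the core mass; muon
ratio = (m_ball / m_mu)**4
print(round(ratio, 1))                       # ~8.7 times shorter lifetime (text: 8.74)
dv_opera = math.sqrt((T_muon / ratio) / T_neutron)
print('%.2e' % dv_opera)                     # ~1.69e-05 -> 1.0000169c (OPERA)

T_W = T_muon / (80400.0 / 424.0)**4
print('%.1e' % T_W)                          # ~1.7e-15 s, lifetime of the W boson
dv_sn = math.sqrt(T_W / T_neutron)
print('%.1e' % dv_sn)                        # ~1.4e-09 -> 1.0000000014c (SN 1987A)

# Time-distance for SN 1987A at 168,000 light-years:
print(round(168000 * 365 * 24 * 2e-9, 1), 'hours')   # ~2.9, i.e. about 3 hours
```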
Limitations in detection of superluminal neutrinos
The speed of light c depends on the inertial mass density of the fundamental/Newtonian spacetime. Lower density means a higher speed of light. The pressure inside the fundamental spacetime is tremendous, about 10^180 Pa. This causes that the fundamental spacetime is exactly flat, so the c is constant. The density is lower than the mean only for distances smaller than about 10^-32 m from the neutrinos. Due to this negative pressure, i.e. due to the weak interactions in the low-energy regime, there arise the regions in the Einstein spacetime in which the binary systems of neutrinos are confined. When such regions are sufficiently large, the neutrinos from weak decays of particles in such regions can be superluminal. The Everlasting Theory says that the carriers of the massless photons, i.e. the entangled binary systems of neutrinos the Einstein spacetime consists of, and so the photons as well (the entanglement causes that photons, i.e. the rotational energies of the binary systems, are wave packets), are moving in the Newtonian spacetime with the speed c. The neutrinos in the binary systems of neutrinos interact weakly, so the neutrinos are moving almost independently. This means that generally the neutrinos are moving with the same speed as the binary systems of neutrinos. This means that the neutrinos, which have mass, mostly are moving with the speed c. We will never see neutrinos which are moving with speeds lower than the c. The General Theory of Relativity is simply incomplete. The Everlasting Theory shows that we must introduce the new term “dominating gravitational gradient” because accelerated particles, besides neutrinos, change their internal structure. The ratio of the mass of the source of the strong interactions to the mass of the carriers of the strong interactions changes as 1/(1 – v^2/c^2), i.e. there appears the running coupling for the strong interactions, whose value depends on the speed v in the dominating gravitational gradient. The accelerated baryons behave as if with each strongly interacting baryon there were associated two different reference frames. This means that an observer in a falling lift in a dominant gravitational field can measure the speed of the lift in relation to the dominating gravitational gradient. We can see that the Principle of General Covariance is strictly correct only for masses at rest or moving with the same speed in the dominating gravitational gradient. Without a reformulation we cannot unify the General Theory of Relativity with the strong interactions. The Everlasting Theory shows how the unification of gravity with the strong interactions looks. For example, the Kasner solution for the flat anisotropic model is correct because it concerns the part of the GR where the Principle of General Covariance is obligatory, i.e. the exact solution (0, 0, 1) is for the resting structure in the dominating gravitational gradient. The approximate solution, i.e. (-1/3, 2/3, 2/3), concerns the sham quarks. The exact Kasner solution leads to the core of baryons. The ground state of the Einstein spacetime is invisible for detectors because the Lagrangian for this state cannot change. For this state, the total energy and the speed c are constant.
This means that the ground state behaves as an empty spacetime, with no matter. These properties cause that the Kasner solution describes the real phenomena. The Everlasting Theory shows that the neutrinos are non-relativistic particles (i.e. their mass does not depend on their speed), so sometimes, in special conditions, they can be superluminal particles. Such neutrinos appear when the weak decays take place inside the strong fields inside baryons containing regions in which the Einstein spacetime components are confined. The total volume of the regions containing the confined components increases when the energy of baryons per collision increases. Due to the atom-like structure of baryons, there is a natural broadening in the spectrum of the superluminal speeds of neutrinos. In the neutrino-speed spectrum for the neutrinos obtained due to the collisions of nucleons there should be the “luminal” peak associated with the speed equal to the c, and there should be the naturally broadened superluminal peak separated from the luminal peak. The Everlasting Theory shows why neutrinos have such “strange” properties. For the “stairs” presented in the figure titled “Dependence of speed of neutrinos on their energies for collisions of nucleons”, for the lower superluminal speeds the y is greater whereas the x is smaller. This is because the mean distances between the strong fields of the nucleons are smaller for higher energies, so the probability of weak decays inside the strong fields increases. The w is quantized (see the same figure). In the collisions of the superluminal neutrinos with the Einstein spacetime components the momentum of the components, mc, cannot change. The rotational energies can change. This means that the superluminal speeds are conserved. The superluminal speeds can change in the exchanges of the superluminal neutrinos for the neutrinos in the binary systems of neutrinos the Einstein spacetime consists of, but such “oscillations” are very rare processes.

Summary
The calculated neutrino speed for the MINOS experiment is 1.000050(21)c. The maximum neutrino speed is 1.000071c. The calculated time-distance between the fronts of the neutrino and photon beams for the OPERA experiment is 58.4 ns, whereas the neutrino speed is 1.0000169(70)c, i.e. the maximum neutrino speed is 1.0000239c. The calculated time-distance between the fronts of the neutrino and photon beams, observed on the Earth, for the supernova SN 1987A is 3 hours, whereas the neutrino speed is 1.0000000014(6)c. Neutrino speed depends on the internal structure of baryons and on the phenomena responsible for the creation of particles that decay due to the weak interactions. The MINOS and OPERA experiments and the data concerning the supernova SN 1987A suggest that there is in existence an atom-like structure of baryons. In MINOS dominated the neutrinos from decays caused by gluon-ball pairs whose energy is two times greater than the mass of the muon. In OPERA dominated the neutrinos from decays caused by gluon-ball pairs whose energy is two times smaller than the mass of the core of baryons, whereas in the supernova SN 1987A explosion, by gluon-ball pairs whose energy is two times smaller than the mass of the W bosons. We can calculate the neutrino speed for the MINOS experiment also in a different, third way. Neutrons and muons decay due to the weak interactions.
From formula (51) it follows that the coupling constant for the weak interactions is proportional to the square of the exchanged mass, whereas the theory of stars leads to T ~ 1/M^4. This means that the square root of the lifetime is inversely proportional to the coupling constant, so applying also formula (57) we can rewrite formula (244) as follows

Xw = αw(beta-decay)/αw(decay-of-muon) = c/(vneutrino – c) = 19,685.3. (245)

From this formula we obtain

vneutrino = c(Xw + 1)/Xw = 1.0000508c. (246)

Due to the weak interactions, the mass of the electron-positron pair can increase the Xw times, whereas the resultant mass, due to the quadrupole symmetry, can increase four times. The final mass is 80,473 MeV and it is the mass of the W boson. The gluon balls decay to the superluminal neutrinos; their masses are as follows: the 4{mneutron – (mproton + melectron)}, in approximation the mass of the point mass in the centre of the core of baryons, 424 MeV, the mass of the core of baryons, 727 MeV, and the mass of the W boson, 80,473 MeV. For the bare mass of the pair we obtain 2·0.510407 MeV·19,685.3·4 = 80.380 GeV. This theoretical result is consistent with the experimental data [2]. There is a relativity of lifetimes for entangled particles. To free a neutron from an atomic nucleus the mean energy equal to the volumetric binding energy 14.952 MeV is needed (see the description concerning formula (183)). On the other hand, we know that the binding energy of the mass X and Y is 14.98 MeV. This energy is close to the volumetric binding energy, so the free neutrons can be entangled with the volumetric binding energy, i.e. the bound neutrons can simultaneously interact with energy two times higher than the volumetric binding energy. From the relation T ~ 1/m^4 and formula (95) it follows that the lifetime of a neutron entangled with the volumetric binding energy 14.97 MeV is 888 s. Similarly, the mass distance between the two charged states of the core of baryons is 2.67 MeV. When a muon is entangled with such energy, its lifetime is 2.21·10^-6 s.

References
[1] https://www.worldscientific.com/etextbook/5500/5500_chap0.1.pdf
[2] K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010)
[3] Feigenbaum Mitchell, Universal Behaviour in Nonlinear Systems, “Los Alamos Science” 1 (1981)
[4] P. Adamson et al. (MINOS Collaboration) (2007), “Measurement of neutrino velocity with the MINOS detectors and NuMI neutrino beam”, Physical Review D 76 (7), arXiv:0706.0437
[5] OPERA Collaboration, T. Adam et al. (2011), “Measurement of the neutrino velocity with the OPERA detector in the CNGS beam”, arXiv:1109.4897 [hep-ex]
[6] K. Hirata et al., Phys. Rev. Lett. 58 (1987) 1490; R. M. Bionta et al., Phys. Rev. Lett. 58 (1987) 1494; M. J. Longo, Phys. Rev. D 36 (1987) 3276
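A minimal Python check I add of formulas (245)–(246) and of the W-boson mass built from the bare electron-positron pair, using only the numbers quoted above:

```python
# Check of formulas (245)-(246) and of the quoted W-boson mass.
Xw = 19685.3
print(round(1 + 1 / Xw, 7))                # ~1.0000508, formula (246)

m_bare_e = 0.510407                        # bare electron mass used in the text, MeV
m_W = 2 * m_bare_e * Xw * 4                # pair mass x Xw (weak) x 4 (quadrupole symmetry)
print(round(m_W / 1000.0, 3), 'GeV')       # ~80.38 GeV
```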
M-theory
We cannot formulate a useful M-theory without the phase transitions of the fundamental neutrinos, the cores of baryons and the protoworlds after the period of inflation.
M-theory: My Everlasting Theory is an extension of the useful M-theory. Within the non-perturbative Everlasting Theory, I described the internal structure and behaviour of all types of closed and open loops/strings. There are the bosonic and fermion loops/strings. There is something beyond the useful M-theory, i.e. the Titius-Bode law for the strong and strong gravitational interactions.
Fundamental bosonic string theory: All particles consist of the binary systems of my closed strings. The phase space of such systems contains 11 elements, but we can reduce it to 10 elements because the distance between the closed strings follows from the internal structure of the closed string. The distance is π times greater than the thickness of the string. We can see that the binary system of closed strings (spin = 1) and the bi-dipole of the closed strings (spin = 2) are bosons, so the fundamental string theory is not a superstring theory. But it consists of the fermions. The binary systems arise at once because the internal helicity of the created systems must be equal to zero. Then the quantum fluctuations in the fundamental spacetime are reduced to a minimum. The superstring theories, i.e. the theories that describe simultaneously the fermions and bosons, appear on a higher level of nature. I showed how to derive the superstring theories from the fundamental string theory, i.e. from the Bosonic String Theory. Due to the phase transitions, there appear the three superstring theories. There are the three stable tori/fermions carrying half-integral spin, i.e. the torus of neutrinos, the torus in the core of baryons and the cosmic torus, i.e. the Protoworld after the period of inflation (they are the k-‘dimensional’ tori in the M-theory). The tori look like closed fermion strings. Inside them there arise the bosonic loops. We can see that there appears the supersymmetry, i.e. the fermion-boson symmetry. The bosonic loops inside the neutrinos and the cosmic loops cannot be open, whereas the large loops produced inside the torus in the core of baryons can be closed or open. The tori of neutrinos and the cosmic object cannot come open, whereas the electric charges/tori come open in the annihilations of the pairs.
Type I superstring theory is the theory of baryons (typical size is about 10^-15 m) and electrons (~10^-13 m). Type IIA superstring theory is the theory of neutrinos (typical size is about 10^-35 m). In this theory, the closed strings in the binary systems of the closed strings the neutrinos consist of have different internal helicities. This looks the same as in the fundamental bosonic string theory. Type IIB superstring theory is the theory of the protoworlds after the period of inflation (typical size is about 10^24 m). In this theory, the nucleons in the binary systems and in the alpha particles the cosmic objects consist of have the same internal helicities.
T-duality: We can see that, in approximation, the inverse of the geometric mean of the typical sizes for the Type I and IIA superstring theories is equal to the typical size for the Type IIB superstring theory. Moreover, the transition from the Type IIB superstring theory to the Type IIA superstring theory was the cause of the ‘soft’ big bang.
Heterotic E8×E8 theory: The ground state of the Einstein spacetime consists of the non-rotating-spin binary systems of neutrinos. There are the 4 different binary systems. They are the carriers of the photons and gluons. There is one type of the two photons, i.e. the left- and right-handed, and 8 types of gluons. Due to the four-neutrino symmetry, the next greater object than the 8 different gluons should contain 8·8 = 64 gluons. This means that the heterotic E8×E8 theory follows from the fundamental bosonic string theory and the Type I superstring theory. In the Everlasting Theory nomenclature, the objects containing the 64 gluons are the chains. They can be the open or closed loops or quadrupoles.
Heterotic SO(32) theory: There are the 4 different carriers of the photons and gluons.
This means that, due to the four-neutrino symmetry, the next greater object than the 4 different carriers should contain 4·4 = 16 binary systems. But there are the virtual particle-antiparticle pairs, so we must multiply this number by 2. Then we obtain the 32 binary systems. We can see that the heterotic SO(32) theory follows also from the fundamental bosonic string theory and the Type I superstring theory. In the Everlasting Theory nomenclature, the objects containing the 32 binary systems are the binary systems of supergroups. They can be the particle-antiparticle pairs. In the heterotic theories, there are the binary systems of neutrinos in which the neutrinos have the same or opposite internal helicities (see Table 8). This follows from the fact that the two neutrinos have to differ by the sign of the weak charges.
Gravity: In the gravitational fields there are the non-rotating-spin bi-dipoles of the neutrinos. Their spin is 2 and they are the carriers of the gravitational energy/mass. There is some analogy between the four different neutrinos, which lead to the bi-dipoles, and the four different binary systems of neutrinos in the heterotic theories. This means that gravity should look similar to the heterotic theories in the very low-energy limit.
S-duality: The Everlasting Theory shows that the Type I superstring theory describes the weak and strong interactions, whereas the heterotic SO(32) theory describes the strong interactions via the gluons. This means that there are in existence similar string theories that vary due to the values of the coupling constants. There are the fermion tori/‘loops’ and the boson loops which arise inside the fermion tori. The circular axes of the fermions overlap with the bosonic loops, but there is the separation of the bosons/loops from the fermions/tori. This causes that we do not need the higher dimensions to describe the internal structure from which the fermion-boson symmetry follows.

Perihelion precession of Mercury and Venus
The perihelion precession of planets we can calculate applying the Newtonian mechanics and then we can add the correction following from the General Relativity. But we obtain a very bad result for Venus (about 1075’’, i.e. 1075 seconds of arc, in comparison to the observational fact of approximately 204’’). The Everlasting Theory shows that the perihelion precession of Mercury, and of Venus as well, is associated with the very deep past of the evolution of the region where the solar system is located. Under the Schwarzschild surface of the neutron black holes and their associations (see Paragraph “Cosmology of the Solar System and Massive Spiral Galaxies” in Chapter “New Cosmology”) there arose the entangled radiation fields composed of the entangled carriers of photons emitted in the nuclear fusions, so of the electron-positron pairs also. It was after the Protoworld-neutrino transition but before the inflows of the dark energy into the cosmic loop, i.e. the early Universe. The region under the Schwarzschild surface refers to the d = 0 and d = 1 states only, i.e. it refers to the orbits of Mercury and Venus only. Assume that some radiation mass of Mercury is distributed in a ring whose width is the distance between the perihelion and aphelion. Such a radiation ring behaves like a mass in the centre of the sun. This means that the gravitational interaction of the abstract radiation mass of Mercury in the centre of the sun with the real radiation ring causes that there appears the spin speed of the radiation ring, and this spin speed is the speed of the perihelion as well.
Based on these explanations we obtain

v^2perihelion,Mercury = GMradiation,Mercury/RMercury. (247)

Due to the interactions of the entangled primordial radiation field with the radiation ring of Mercury, there is a resonance of the angular velocities of these two fields. Venus partially behaves as a single-arm lever. Due to the single-arm lever, for Venus we obtain

v^2perihelion,Venus = v^2perihelion,Mercury·(MMercury·RVenus/(MVenus·RMercury)) = a·v^2perihelion,Mercury, (248)

where MMercury = 3.3022·10^23 kg, RMercury = 5.7909100·10^10 m, MVenus = 4.8685·10^24 kg, RVenus = 1.08208930·10^11 m, whereas a = 0.1267432. The square of the speed of the perihelion of Venus is directly proportional to the mean radius of the orbit (the arm of the lever) and inversely proportional to the mass of Venus (greater inertia means a smaller speed of the perihelion). We can see that the following formula is satisfied

vperihelion,Venus = sqrt(a)·vperihelion,Mercury = 0.35601·vperihelion,Mercury. (249)

Calculate the radiation mass of Mercury. Due to the gluon-photon ‘transition’ (the strong-electromagnetic transition) the internal structure of the nucleon leaks. By an analogy to formula (79), there should appear the following factor g = 2αsαem. The radiation mass of the electron is x = melectron – mbare(electron), whereas the radiation mass of the proton is equal to the mass distance between the neutron and the proton, i.e. y = mneutron – mproton. The proton-neutron transitions are due to the large loops, so the αs = 1 (see formula (77)).

Mradiation,Mercury = g·x·MMercury/y = z·MMercury = 2.2545·10^18 kg, (250)

where z = 6.8272·10^-6. Now we can calculate vperihelion,Mercury = 5.0973·10^-2 m/s. Calculate the perihelion precession of Mercury per century, T(100 years) = 3.155693·10^9 s:

φMercury/T = 360°·vperihelion,Mercury·T/(2πRMercury) = 0.159153° = 573.0’’. (251a)

For Venus

φVenus/T = sqrt(a)·φMercury/T = 204.0’’. (251b)

Starting from the observational result for Mercury, we obtain for Venus (204.39 ± 0.23)’’.
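A minimal Python check I add of formulas (247)–(251b); the gravitational constant G is a standard value I supply, the remaining inputs are quoted in the text:

```python
import math

# Check of formulas (247)-(251b); SI units.
G = 6.674e-11
M_mercury, R_mercury = 3.3022e23, 5.79091e10
M_venus,   R_venus   = 4.8685e24, 1.08208930e11
z = 6.8272e-6                               # radiation-mass factor, formula (250)
T = 3.155693e9                              # one century in seconds

M_rad = z * M_mercury
print('%.4e' % M_rad)                       # ~2.2545e18 kg

v_per = math.sqrt(G * M_rad / R_mercury)    # formula (247)
print('%.4e' % v_per)                       # ~5.097e-2 m/s

phi = 360.0 * v_per * T / (2 * math.pi * R_mercury) * 3600
print(round(phi, 0), 'arcsec')              # ~573'' per century for Mercury, (251a)

a = M_mercury * R_venus / (M_venus * R_mercury)   # formula (248)
print(round(math.sqrt(a) * phi, 0), 'arcsec')     # ~204'' per century for Venus, (251b)
```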
Foundations of Quantum Physics
Due to the faster-than-light particles (i.e. the tachyons and the binary systems of closed strings) the quantum physics is non-local, i.e. points separated spatially (i.e. which cannot communicate in a defined time, for example during the time of decay of a particle, due to exchanges of photons, gluons or subluminal particles) can communicate. The behaviour of the renewable particles shows that the quantum physics is partially unreal: for example, the mass of the electron (not the electric charge) or the energy of an entangled photon can be simultaneously in many places of space. We can see that the existence of the two spacetimes, i.e. the imaginary Newtonian spacetime and the Einstein spacetime, leads to the non-locality of nature. The first phase transition of the imaginary Newtonian spacetime leads to the closed strings (spin is half-integral) and the binary systems of closed strings (spin is equal to 1). This causes that nature conserves the spins of particles. The spin equal to 1 of a virtual large loop (mass is 67.5444 MeV) responsible for the strong interactions must be conserved because then the loops still have the same spin as the carriers of the elementary gluons and photons, i.e. the neutrino-antineutrino pairs the Einstein spacetime consists of. The Uncertainty Principle, ΔEenergy·Tlifetime = h, defines the spin of a virtual loop. The loop consists of the binary systems of neutrinos, so its mass cannot change. Its spin velocity is perpendicular to the relativistic velocity, i.e. vrel^2 + vspin^2 = c^2. The lifetime of the loop is defined as Tlifetime = 2πr/vspin = 2πr/(c(1 – vrel^2/c^2)^1/2), i.e. the lifetime increases when the relativistic speed increases. From the Uncertainty Principle it follows that then the energy of the carriers of the strong interactions decreases. This leads to the running coupling for the strong interactions. We can see that the classical definition of lifetime, the invariance of spin and the perpendicularity of the spin and relativistic velocities lead to the Uncertainty Principle, i.e. to the conclusion that the indeterminacy in the distribution of energy is inversely proportional to the lifetime. The Everlasting Theory shows also that the behaviour of the quantum/renewable particles (they disappear in one place and appear in another, and so on) causes that there is a distribution of energy and mass, so to describe such particles we must apply the wave functions and equations in which the distributions can change over time. The resultant wave function for many growing spinning loops is the sum of the constituent wave functions. For a constituent growing spinning loop

x = vradial·t + λφ/2π, (252)

where

v^2radial + v^2spin = c^2. (253)

The growing loop accelerates its expansion. We can see that the axis x overlaps with the loop, whereas the axes of time t are radial and begin on the loop. For vradial·t >> λ we have vradial = c (since m·vspin·r = h, for increasing vradial, so also r, the spin speed decreases), then

x = ct + λφ/2π. (254)

Since k’ = p/h, λ = h/p, 2πν = ω and E = hν = hω, we obtain

k’x – ωt = φ. (255)

A moving rotating-spin loop (the transverse wave) we can describe using the following function (see Chapter “Fractal Field”):

ψ(x,t) = a·e^iφ = a(cosφ + i·sinφ), (256)

where φ = k’x – ωt. Define the following operators: E = ih∂/∂t and p = –ih∂/∂x. We can see that

Eψ = ωhψ = ih∂ψ/∂t, (257a)

whereas

ppψ = p^2ψ = –h^2∂^2ψ/∂x^2. (257b)

For vrelativistic << c these operators lead to the Schrödinger equation.
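A minimal symbolic check I add (a Python/sympy sketch, not part of the original text) showing that the operators E and p^2 acting on the wave function (256) return hω and h^2·k’^2 respectively, as in (257a)–(257b):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
a, k, w, hbar = sp.symbols('a kprime omega hbar', positive=True)

# Wave function of a moving rotating-spin loop, psi = a*exp(i*(k'x - wt)), formula (256)
psi = a * sp.exp(sp.I * (k * x - w * t))

E_psi  = sp.I * hbar * sp.diff(psi, t)     # E = i*hbar d/dt, formula (257a)
p2_psi = -hbar**2 * sp.diff(psi, x, 2)     # p^2 = -hbar^2 d^2/dx^2, formula (257b)

print(sp.simplify(E_psi / psi))    # -> hbar*omega
print(sp.simplify(p2_psi / psi))   # -> hbar**2*kprime**2
```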
Foundations of General Theory of Relativity
In an inertial reference system we can define the distance between two neighbouring points in spacetime as the square of the interval, which is a quadratic form of the differentials of the co-ordinates: ds^2 = dx_i·dx^i (i = 0 (for the time co-ordinate), 1, 2, 3). In a non-inertial reference system (there appear fields that curve the spacetime) there appear the products dx^α·dx^β as well, and some coefficients gαβ (ds^2 = gαβ·dx^α·dx^β). In general, for the 4 dimensions we obtain 16 such coefficients that we can write as a metric of the field(s). The coefficients gαβ and gβα are multiplied by the same product dx^α·dx^β, so gαβ = gβα and we can reduce the number of the coefficients to 10. The gαβ is a symmetric tensor of rank two. Finally, the metric tensor gαβ is related to the energy-momentum tensor Tαβ of the matter distribution by Einstein’s field equations. Due to the non-inertial reference systems in the General Theory of Relativity (the GR), in this theory the notion of reference systems does not have the same meaning as in the Special Theory of Relativity. There is the conclusion that the properties of motion of bodies are different in different reference systems. This causes that the selection of a proper reference system is very important in the GR. To choose a proper reference system we must know the internal structure of the two spacetimes and of the bare particles. Wrongly selected reference systems lead to wrong interpretations within the GR. In the GR the fact that the Einstein-spacetime components are non-relativistic particles is neglected. For the components in a loop we can write the following formula

v^2relativistic + v^2spin = c^2. (259)

The inertial mass of the Einstein spacetime components is equal to their gravitational mass. Sometimes the GR is very simple when the reference system is properly chosen. A wrongly chosen reference system leads, for example, to the conclusion that there is an acceleration of the expansion of the Universe. The Everlasting Theory shows that the following conditions, which lead to the GR, are satisfied. When a carrier of a photon loop is moving in a spherical gravitational field, its relativistic speed overlaps with a radius from the emitter (the distant star), whereas the spin speed is tangent to an orbit in the gravitational field. When the directions of the radial/relativistic velocity and the spin velocity of a loop are perpendicular, formula (259) must be satisfied. The situation is different when a photon loop overlaps with the equator of the sun (see Figure “Curving of light in gravitational field”). The spin vector of the photon loop rotates in the plane of the figure, so the plane of the figure is the plane of polarization of the photon. When the photon loop overlaps with the equator of the sun, there should be v’spin = sqrt(GM/R), where M is the mass of the sun whereas R is the radius of the sun. But it is not true. The radial speed of the photon loop in relation to the distant star (the emitter) cannot be higher than the c, so there appears the pivoting point for the plane of polarization. This causes that the spin speed of the components of the photon loop at the distance 2R from the pivoting point is two times higher

vspin = 2·sqrt(GM/R). (260)

This spin speed decreases the radial speed of the photon loop in relation to the emitter. Since the resultant speed must be equal to the c, there appears a radial speed in relation to the sun. The plane of polarization of the photon must be perpendicular to the resultant speed c (the electromagnetic waves are transverse waves), so the radial speed in relation to the sun leads to the rotatory polarization. The radius of a nucleon black hole is rbh = GM/c^2, whereas the spin speed of an object at distance r is vspin = sqrt(GM/r). For vspin = c the angle between the planes of polarization must be φ = π/2, which means that the black hole captured the light at a distance two times smaller than the radius of the Schwarzschild surface. This condition leads to the following formula

tg(φ/2) = rbh/r = v^2spin/c^2. (261)

When the photon loop overlaps with the equator of the sun we obtain

tg(φ/2) = 2GM/(Rc^2) = 4.244·10^-6. (262)

Then

φ = 4.864·10^-4 [°]. (263)

When we multiply this result by 3600, we obtain the result in seconds of arc: φ = 1.75’’. This result is consistent with the results obtained within the GR and with the observational facts. Because the Everlasting Theory leads to the result obtained within the GR, we can say that the Everlasting Theory leads to the GR. The General Theory of Relativity and the Quantum Physics follow from the behaviour and properties of the loops composed of the entangled Einstein-spacetime components, i.e. the neutrino-antineutrino pairs.
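A short Python check I add of (262)–(263); the solar mass and radius and the constants G and c are standard values I supply here:

```python
import math

# Check of formulas (262)-(263) with standard solar values.
G, c = 6.674e-11, 2.99792458e8
M_sun, R_sun = 1.989e30, 6.96e8

t = 2 * G * M_sun / (R_sun * c**2)          # tg(phi/2), formula (262)
print('%.3e' % t)                           # ~4.244e-6

phi = 2 * math.degrees(math.atan(t))
print('%.3e' % phi, 'deg')                  # ~4.864e-4 degrees, formula (263)
print(round(phi * 3600, 2), 'arcsec')       # ~1.75''
```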
Combination of Quantum Physics and General Theory of Relativity
The Quantum Physics and the General Theory of Relativity disappear for mass/energy density equal to the Planck critical density. The QP and GR are associated with the properties of the Einstein spacetime. For the Planck critical mass/energy density, the neutrinos decay into the free binary systems of the closed strings, i.e. the Einstein spacetime disappears, i.e. E = 0. For a black hole that has radius equal to the Planck critical length, rbh,critical = Rcritical = λcritical/(2π), we have rbh,critical = GMcritical/c^2 = 1.6162·10^-35 m, whereas ωcritical = Mcritical·c^2/h, where Mcritical = sqrt(ch/G) = 2.1765·10^-8 kg. This leads to the following formulae for mass densities higher than the critical mass/energy density (it is in approximation the density inside a neutrino for the geometric mean of the mass/energy of a non-rotating-spin neutrino – see Chapter “New Big Bang Theory”)

1 – 2πrbh,critical/λcritical = 0, (264a)

1 – ωcritical·rbh,critical/c = 0. (264b)

The last formula leads to the definition of the critical mass, i.e. Mcritical = sqrt(ch/G). Now we can generalize the Schrödinger equation by adding the gravity

– (h^2/(2m))∂^2ψg(x,t)/∂x^2 + V(x,t)ψg(x,t) = ih∂ψg(x,t)/∂t. (265)

The definitions of the momentum and energy operators are the same, i.e. p = –ih∂/∂x and E = ih∂/∂t. We can see that the following wave function satisfies the generalized Schrödinger equation

ψg(x,t) = a·e^iφg = a(cosφg + i·sinφg), (266)

where

φg = (k’ – 1/rbh)x – (ω – c/rbh)t. (267)

The rbh is the hypothetical radius of a black hole that has mass equal to the sum of the masses of all objects in the sphere whose radius is equal to the distance between the centre of the hypothetical black hole and the object for which formula (265) is written. Now we can write the generalized formula for energy for vrelativistic << c.
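A minimal Python check I add of the critical quantities quoted above; G, c and the reduced Planck constant are standard values I supply:

```python
import math

# Check of the critical (Planck-scale) quantities near formulas (264a)-(264b).
G, c, hbar = 6.674e-11, 2.99792458e8, 1.054572e-34

M_crit = math.sqrt(c * hbar / G)
print('%.4e' % M_crit, 'kg')                # ~2.1765e-8 kg

r_bh_crit = G * M_crit / c**2
print('%.4e' % r_bh_crit, 'm')              # ~1.6162e-35 m
```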
General Relativity in Reformulated Quantum Chromodynamics and New Cosmology
The Friedman isotropic model leads to a singularity due to the initial simplification that there is the symmetry. In reality, there was the left-handed rotary vortex in the Einstein spacetime, so we should consider the flat anisotropic model. In nature the spatial distances do not disappear for distances approaching zero. This means that singularities of the oscillatory mode are not in existence either. But there is an oscillatory mode in the approach to a singularity. The strong fields behave similarly to the strong gravitational fields. For both types of fields the Titius-Bode law (r = A + dB) is in force and for both types of fields the ratio A/B has practically the same value, 1.39. This means that there should be some tangent points between the General Theory of Relativity and the reformulated Quantum Chromodynamics for the interiors of gravitational black holes and the cores of baryons, i.e. the black holes in respect of the strong interactions. I explained before that inside the core of baryons the Einstein spacetime is flat but, due to the properties of the electric/strong charge (the torus), the strong field has internal helicity and, due to the shape (the torus and the loops), the strong field is anisotropic; the mass density of the strong field is 509 times lower than that of the Einstein spacetime (see formula (11) and Tables 2a and 7). It looks like the flat anisotropic model in the General Relativity. Within the GR, the flat anisotropic model leads to the form of the metric (Edward Kasner, 1921, [1]) for which the solutions are the same as in the reformulated QCD presented within the Everlasting Theory. From the reformulated QCD it follows that the electric/strong charges of the sham quarks and their masses are directly proportional to the radii of the gluon loops from which the sham quarks arise. The electric/strong charges of the basic sham quarks associated with the core of baryons, i.e. the black hole in respect of the strong interactions, are ±1Q/3, ±2Q/3, ±1Q, and for the sham quark-antiquark pairs 0. We can see that the generalized lower and upper limits of the intervals obtained within the anisotropic model [1] can define the electric/strong charges of the basic sham quarks or of their pairs produced inside the core of baryons. The intervals [1] and the reformulated QCD show that there are in existence other charges as well. We can see that for the core of baryons, for the basic sham quarks or their pairs, the charge Q is multiplied by the following basic numbers: 0, ±1/3, ±2/3, ±1, but all intermediate values from the interval can appear. We know that there is the ternary symmetry for the strong and electric interactions of the torus in the core of baryons (i.e. for the electric/strong charge) and the resultant electric/strong charge of three charges (a, b, c) in a virtual structure in a proton or antiproton must be equal to ±1 (this follows from the law of conservation of electric charge), i.e.

a + b + c = ±1. (270)

Moreover, due to the flatness and homogeneity of the Einstein spacetime in which the virtual particles arise, the masses of the charges a, b and c are directly proportional to the radii of the gluon loops from which the sham quarks arise. Thus, from the formula for the spin of virtual particles (spin = E·Tlifetime) we obtain that the spin of a sham quark is directly proportional to the square of its charge. On the other hand, the resultant spin of the virtual ternary structures must be equal to 1, i.e. it must be the same as for the Einstein-spacetime components. These remarks lead to the following formula

a^2 + b^2 + c^2 = 1. (271)

The electric/strong charge equal to 1Q relates to the loop whose radius is 1A. This means that, due to the shape of the core of baryons and the Titius-Bode law for the strong interactions, the probabilities of creation of the following virtual pairs are the highest: ±1Q/3, ±2Q/3, ±1Q, ±1.72Q, ±2.44Q and ±3.88Q. Only the three first virtual pairs concern the core of baryons, i.e. the Kasner metric. Formulae (270) and (271) lead to the following two basic solutions for the virtual ternary structures in the proton (+1Q)

0, 0, +1, (272a)
–1/3, +2/3, +2/3, (272b)

and to the following two basic solutions for the antiproton (–1Q)

0, 0, –1, (272c)
+1/3, –2/3, –2/3. (272d)

The Kasner metric [1] is the exact solution of the Einstein equations for ‘empty’ spacetime, i.e. in the Everlasting Theory nomenclature such a spacetime consists of the non-rotating-spin neutrino-antineutrino pairs moving with speed equal to the c. Such pairs cannot transfer any energy to other systems, i.e. we can assume that the ground state of the Einstein spacetime is ‘empty’. The exact solution is 0, 0, 1, i.e. in both cores of the proton and antiproton there arises a virtual large loop only (mass is ±67.5444 MeV) and simultaneously two virtual loop-antiloop pairs, i.e. a quadrupole of loops. Spins of loops in a pair must be antiparallel. For example, there can arise simultaneously two neutral pions and one large loop L, i.e. (π°, π°, L) or (virtual π°, real π°, L) or (virtual π+π-, real π+π-, L). Such ternary structures are in the mesonic nuclei (see Chapter “Structure of Particles (continuation)”). We can notice that a virtual/real charged ternary structure consists of five elements, for example, (±1/3, ±2/3, +1) or (±2/3, ±1, +1) or (±1, ±1.72, +1).
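A minimal Python sketch I add (an illustration, not part of the original text) enumerating the triples over the basic values 0, ±1/3, ±2/3, ±1 that satisfy (270) with a + b + c = 1 and (271); it returns exactly the two basic solutions (272a) and (272b), up to permutation:

```python
from fractions import Fraction
from itertools import product

# Enumerate ternary charge structures (a, b, c) over the basic values 0, +-1/3, +-2/3, +-1
# satisfying a + b + c = 1 (formula (270)) and a^2 + b^2 + c^2 = 1 (formula (271)).
basic = [Fraction(n, 3) for n in (-3, -2, -1, 0, 1, 2, 3)]

solutions = set()
for a, b, c in product(basic, repeat=3):
    if a + b + c == 1 and a * a + b * b + c * c == 1:
        solutions.add(tuple(sorted((a, b, c))))

for s in sorted(solutions):
    print(tuple(str(f) for f in s))   # ('-1/3', '2/3', '2/3') and ('0', '0', '1')
```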
The structure (±1, ±1.72, +1) does not concern the Kasner metric but satisfies the Kasner solution (272a) and is important in the reformulated QCD that follows from the atom-like structure of baryons. There are two more solutions applied in the reformulated QCD that satisfy formula (272a) also, i.e. (±1.72, ±2.44, +1) and (±2.44, ±3.88, +1). For very small t the Kasner metric is an approximate solution. Then the metric concerns the excited states of the spacetime. As some recapitulation we can say that the generalized flat anisotropic model (E. Kasner, 1921) and the generalized oscillatory approach to a singularity (the BKL model, [2]) lead to the reformulated QCD presented within the Everlasting Theory, i.e. to the electric/strong charges of the basic and other sham quarks produced by the core of baryons, i.e. by the black hole in respect of the strong interactions. There is a similarity of the internal structure of the neutrinos, the cores of baryons and the protoworlds. This suggests that the BKL model is applicable to the three types of objects. We can say that the BKL oscillatory model leads to the phase transitions described within the Everlasting Theory. The protoworlds have no electric/strong charge, whereas their gravitational ‘charge’ (i.e. the mass) is positive. This is the reason why the Kasner metric does not lead to the formula a + b + c = –1. The Einstein spacetime and the cores of baryons consist of the neutrino-antineutrino pairs, so we can say that the BKL model is applicable to infinite space. The neutrinos consist of the binary systems of closed strings and today there is not in existence a spacetime composed of the binary systems of closed strings. We can see that the spaces composed of the binary systems of closed strings are the finite spaces inside the neutrinos. This means that the BKL model we can apply to both finite and infinite spaces. The same conclusion follows from the BKL model. The BKL model shows also that some perturbative action leads to an oscillatory mode (the phase transitions) on approaching the singularity, but a transition to a new state is more energetic than the initial perturbation. The same we can say about the protoworld-neutrino transition. A very small perturbation (a small mass added to the stable protoworld) causes the transition, but the energy involved in the transition very much exceeds the range of the very small perturbation. Moreover, such a transition forced the exit of the early Universe from the black-hole state. If we assume that the circumference of the equator of the torus is equal to 1, then the circumference of the circular axis is 2/3, of the internal equator 1/3, whereas of the point mass approximately zero, i.e. we obtain the following finite series S1: 0, 1/3, 2/3, 1. If we assume that the circumference of the circular axis is 1, then we obtain the following finite series S2: 0, 1/2, 1, 3/2. When in physics there appear sets containing the elements of the first or second series multiplied by a factor, there is a very high probability that such eigenvalues are associated with the internal structure of the core of baryons, i.e. it is due to the leaking internal structure of the core, i.e. due to the gluons for the strong interactions or due to the photons from the gluon-photon ‘transitions’ for the electromagnetic interactions. For example, the isospin I is defined as follows: N = 2I + 1, where I = S2. The magnetic energy of electrons in atoms is directly proportional to the Landé factor g.
If we define g as g = 2g', then g' for ²P₁/₂ is 1/3, for ²P₃/₂ it is 2/3, and for ²S₁/₂ it is 1, i.e. g' = S1. The Everlasting Theory has strong foundations which follow from the General Theory of Relativity. I proved that the latter theory leads to the tachyons and to the phase transitions of the fundamental spacetime. The Everlasting Theory also leads to the invariance of the speed of light and to the equivalence of the inertial and gravitational masses, i.e. to the postulates of GR. Moreover, I proved that my theory leads to the basic equations applied in Quantum Physics. The Everlasting Theory is the lacking part of the ultimate theory.

References
[1] Kasner, Edward; "Geometrical Theorems on Einstein's Cosmological Equations."; American Journal of Mathematics 43 (4): 217-221 (1921)
[2] Khalatnikov, I. M. and Lifshitz, E. M.; "General Cosmological Solution of the Gravitational Equations with a Singularity in Time."; Physical Review Letters 24 (2): 76-79 (1970)

Electroweak Interactions, Non-Abelian Gauge Theories and Origin of E = mc^2

The Everlasting Theory leads to the electroweak theory [1] for energies higher than 125 GeV. In the theory of electroweak interactions, the left-handed component of the electron's wave function forms a weak isospin doublet with the electron neutrino. There is also the right-handed singlet associated with the electron-type lepton fields. This leads to the gauge group SU(2)L×U(1)L×U(1)R. On the other hand, the Everlasting Theory shows that the internal helicity of the electron is left-handed, that of the positron is right-handed, and that of the electron antineutrino, which forms a stable structure with the electron (see Chapter "Structure of Particles (continuation)"), is left-handed. This suggests that the electroweak theory describes the electron antineutrino interacting with an electron-positron pair. The electromagnetic binding energy of an electron-electron-antineutrino pair with the core of baryons is approximately m = 3.097 MeV (see the description below formula (35)). The mass m is indirectly associated with the transition of the core of baryons from the charged state to the neutral state. The density of the Einstein spacetime is approximately 40,363 times higher than that of the weak field, i.e. of the point mass in the centre of the core of baryons (see the description below formula (75)). This means that the mass of the Einstein spacetime that occupies the same region as the electromagnetic binding energy is MH = 3.097 MeV · 40,363 ≈ 125 GeV. The latest LHC experiments lead to such a mass of the Higgs boson. For a pair, i.e. for the mass 2m, we obtain 2MH = 250 GeV. In the mainstream electroweak theory this relates to the vacuum expectation value of the Higgs field. The 250 GeV is close to the vacuum expectation value, i.e. the electroweak scale, which is the typical energy of processes described by the electroweak theory. It follows that the electroweak scale corresponds to a pair of the Higgs bosons. Within the mainstream electroweak theory the masses of the W and Z bosons are calculated for an energy of approximately 90 GeV. For such energy, the fine-structure constant applied in the electroweak theory is close to the value calculated within the Everlasting Theory (see formula (87): 1/129.7). The mass of the W boson I calculated in Chapter "Neutrino speed" (80.380 GeV), whereas the calculated mass of the Z boson is 92.0 GeV (see "Reformulated Quantum Chromodynamics") or 90.4 GeV (see formula (120)). Notice that the mean of these two Z-boson masses is 91.2 GeV.
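Two of the numerical claims above can be checked independently. The quoted g' values follow from the standard Landé g-factor for an LS-coupled term, g = 1 + [J(J+1) + S(S+1) − L(L+1)]/(2J(J+1)), which indeed gives g/2 = 1/3, 2/3, 1 for ²P₁/₂, ²P₃/₂ and ²S₁/₂; and the Higgs-mass estimate is plain multiplication. A small illustrative Python sketch (the function and variable names are mine):

```python
def lande_g(L, S, J):
    """Standard Lande g-factor for an LS-coupled term."""
    return 1 + (J*(J+1) + S*(S+1) - L*(L+1)) / (2*J*(J+1))

# g' = g/2 for the three terms quoted in the text
for name, L, J in [("2P1/2", 1, 0.5), ("2P3/2", 1, 1.5), ("2S1/2", 0, 0.5)]:
    print(name, lande_g(L, 0.5, J) / 2)   # -> 1/3, 2/3, 1 (the series S1)

# Higgs-mass estimate: binding energy 3.097 MeV scaled by the density factor 40,363
m_binding_MeV = 3.097
factor = 40_363
print(m_binding_MeV * factor / 1000)       # ≈ 125.0 GeV
```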
The new theory of the weak interactions described within the Everlasting Theory shows that an appropriate rotary vortex of energy (the H boson), which appears due to the entanglement of the components of the carrier of such energy (such a vortex has left- or right-handed internal helicity), decreases the local pressure (so increases the local mass density) of the Einstein spacetime in such a way that the spacetime components inside the region occupied by the carrier of the electromagnetic binding energy start to interact weakly, i.e. they come closer to one another. This causes a mass visible to detectors to appear. Due to the weak interactions, the H boson is a concentration of the local Einstein spacetime. The factor F = 40,363 causes the energy of the H boson to be

MH = Fm, (273)

where m is the electromagnetic binding energy for the H⁺ → H⁰ transition. For energies lower than 125 GeV the weak interactions of baryons are associated with the point mass Y = 424.1245 MeV, not with the Z and W bosons. The mechanism described within the Everlasting Theory is not the Higgs mechanism. The energy of the conversion we can refer to as the H boson because it is indirectly associated with the H⁺ → H⁰ transition. The arising mass gap follows from the atom-like structure of baryons and the properties of the spacetimes. Physical conversion of massless energy into mass via some massless-energy condensation is impossible. The inertial-masses/volumes/pieces-of-space and their rotational energies are the everlasting attributes of spacetimes.

Non-Abelian gauge theories

In the Einstein spacetime arise the components of the fractal field. Due to the entanglement, there appear fractal closed loops that spontaneously break the global symmetry of the Einstein spacetime. The mass density of such a loop is the same as the mean mass density of the Einstein spacetime. This means that the massless energy and mass of such a loop, as seen by detectors, are equal to zero. In the strong interactions, a binary system of such loops behaves as the electron-electron pair in the ground state of an atom, so the zero-spin loops (for detectors) behave as fermions. Such loops behave as ghosts, so I will refer to them as the ghost loops. There can appear ghost fields composed of loops with different radii. Consider, for example, the strong field of a proton composed of the gluons. The state of such a field with the electric/strong charge we can describe via a wave function ψn(x). When we add the ghost field to the gluon field with the charge, the energy does not change, because the ghost field is part of the ground state of the Einstein spacetime. We can call such a transformation the first gauge transformation and write it symbolically as E → E + 0, where E is the energy of the field defined by ψn(x), whereas the zero is the energy and mass of the ghost field that the detectors 'see'. Such a gauge transformation means that we change the phase of the wave function (see Chapter "Foundations of Quantum Physics"). The derivatives of ψn(x) do not transform as ψn(x). To write a gauge-invariant Lagrangian we need derivatives that contain ∂ψn(x) and transform like ψn(x). This forces the introduction of the second gauge transformation. It looks as follows: vector-field → vector-field + derivative-of-ghost-field. Symbolically: M → M + 0, where the zero is the mass of a ghost field whose energy is not equal to zero. This means that the loops are open.
For example, the electromagnetic energy 3.097 MeV can be a catalyst for the H boson creation, i.e. a low massless energy largely breaks the local symmetry of the Einstein spacetime. We can use the vector field to construct a gauge-covariant derivative that transforms like ψn(x). The derivative of the ghost field is a ghost field carrying energy visible to detectors; its mass is equal to zero. The vector field consists of the ghost loops carrying energy and mass. The ghost field carrying mass consists of the non-zero-spin rotary vortices/loops. A ghost loop carrying energy (it has internal helicity) decreases the local pressure in the Einstein spacetime, which forces inflows of mass from the surrounding Einstein spacetime. The energy of the massless loop is E, whereas the total energy/mass of the loop carrying energy and mass is: energy = Mc^2 + E = 2E. This formula leads to the Einstein formula E = mc^2. We can see the discrepancy between the mass and the total energy of such a loop. The massless energy carries information. The Everlasting Theory shows that for massless energies the coupling constants are equal to zero. Only carriers carrying mass lead to binding energies. For example, the value of the fine-structure constant follows from the mass of the electron-positron pairs created by electric charges via photons. The Everlasting Theory shows that the mass of the large loops responsible for the strong interactions inside baryons is 67.5444 MeV, and the spin associated with this mass is unitary. The ghost loops, or ghost loops carrying such energy (not mass), arise on the circular axis, and they do not violate the laws of conservation of the spin and charge associated with the torus inside the core of baryons. The ghost loops carrying energy acquire their mass outside the core. However, on the circular axis there can appear binary systems of the large loops carrying mass whose spin is equal to zero, i.e. the neutral pions or π⁻π⁺ pairs, because such structures do not change the spin and charge of the core of baryons.

Origin of E = mc^2

Electromagnetic energy, i.e. massless energy, near electric charges can insignificantly increase the mass density of the local Einstein spacetime (by one part in 40,364) in such a way that the spacetime components start to interact weakly. This is the broken symmetry of the local Einstein spacetime. On the surface of the volume with broken symmetry appears the surface tension γ, because the Einstein spacetime behaves as a gas whereas the volume with the broken symmetry behaves as a liquid. The size of the volume, i.e. its diameter, is 2λ, where λ is the length of the electromagnetic wave. Due to the surface tension, there appears a positive internal pressure p in the liquid, and the following formula is satisfied

γ = 2λp. (274)

The absolute values of the positive internal pressure and of the negative pressure created by the rotary vortex of the electromagnetic energy are the same. The rotary vortex has internal helicity due to the spin rotation of the carriers of the electromagnetic wave. The following formula defines the internal pressure

p = ρc^2/2. (275)

From formulae (274) and (275) we obtain

c^2 = 2πγ/(λρ). (276)

On the other hand, for a wave on the surface of deep water we have [2]

v^2 = 2πγ/(λρ), (277)

where v is the speed of the water wave. We can see a full analogy between the two different phenomena. This means that formula (276) is correct for electromagnetic waves whose amplitude is very small in comparison with the size of the volume with broken symmetry.
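The deep-water relation (277) cited here is the standard phase-speed formula for capillary (surface-tension-driven) waves and is easy to evaluate numerically; separately, the density excess "one part in 40,364" quoted above matches the constant f ≈ 2.478·10^-5 used in formula (279) below. A small illustrative Python check (the water values are ordinary textbook numbers, not taken from this document):

```python
import math

# Capillary-wave phase speed on deep water, v^2 = 2*pi*gamma/(lambda*rho)
gamma = 0.0728   # surface tension of water at room temperature, N/m
rho = 1000.0     # density of water, kg/m^3
lam = 0.01       # wavelength, m

v = math.sqrt(2 * math.pi * gamma / (lam * rho))
print(f"capillary wave speed: {v:.3f} m/s")   # ~0.214 m/s

# Density excess 'one part in 40,364' vs the f used in formula (279)
print(f"1/40364 = {1/40364:.4e}")             # ~2.4775e-05 ≈ 2.478e-05
```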
We know that λ = hc/E, so from formula (276) we obtain

E = c^2·hcρ/(2πγ) = c^2·h/(λc). (278)

This formula shows that the Einstein formula E = mc^2 is correct under the condition mcλ = h, i.e. for loops whose spin is unitary. The masses behave analogously to the mass Y = 424.1245 MeV. This leads to the conclusion that the coupling constants for the gravitational interactions we can calculate similarly to the coupling constant for the weak interactions of baryons, i.e. a mass is the source and carrier of gravitational interactions: αgr = GM^2/(ch). The mass visible to detectors we can calculate from the following formula

m = 4π(ρ – ρE)λ^3/3 = 4fπρEλ^3/3, (279)

where f ≈ 2.478·10^-5.

References
[1] Steven Weinberg, The Quantum Theory of Fields, Volume II: Modern Applications; pp. 305-317 (1996)
[2] Walter Weizel, Lehrbuch der Theoretischen Physik; Volume I, 1; Springer-Verlag, Berlin (1955); or the Polish edition, PWN (1958), pp. 348-355, formula (27).

Recapitulation and Ultimate Equation

There are a few excellent theories that are wrongly located and/or misinterpreted. The string/M theory is wrongly located and misinterpreted. There are closed strings; however, they are inflexible ideal circles and have other properties (their radius is about 10^-45 m). There are also large loops (their radius is about 0.465 fm), whereas the external radius of the torus of a neutrino (in estimation we can treat such a torus as a closed string), i.e. of the weak charge of a neutrino, corresponds to the string/M theory (the radius is about 10^-35 m). The phase space of a neutrino has 26 elements, and the neutrino consists of the inflexible closed strings. The phase space of a closed string has 10 elements. A neutrino is not a flexible object. The M-theory becomes a useful theory due to the phase transitions of the Newtonian spacetime.

Quantum gravity: The neutrinos are the 'carriers' of the gravitational constant. There are only 4 different neutrinos (the electron neutrino and its antineutrino, and the muon neutrino and its antineutrino). The graviton could be the rotational energy (its mass is zero) of a particle composed of the four different neutrinos in such a way that the carrier of the graviton is the binary system of binary systems of neutrinos with parallel spins, i.e. the spin of the carrier of the graviton is 2. We will call such a carrier the neutrino bi-dipole. Due to the internal structure of the neutrino quadrupole, when it rotates there appear two transverse waves, i.e. it behaves as two entangled photons, not as a graviton. Gravitational energy is emitted via the flows in the Einstein spacetime composed of the non-rotating-spin neutrino bi-dipoles. Gravitons and gravitational waves do not exist. The neutrinos, binary systems of neutrinos, bi-dipoles of neutrinos, and so on, produce the gradients in the Newtonian spacetime that are impressed on the Einstein spacetime too. We can describe gravity via such gradients. When the time of an interaction is longer than about 10^-60 s, the Newtonian spacetime looks, for interacting particles composed of the Einstein spacetime components, like a continuum, and we can apply the Einstein equations. Such a continuum leads to the symmetries and the laws of conservation too. Since the spin of the carriers of gravitons is 2 whereas that of the neutrinos is 1/2, quantum gravity leads to the conclusion that the neutrinos have only two flavours, i.e. only four different neutrinos exist. The tau neutrinos do not exist.
The Kasner solution (1921) in the General Theory of Relativity is the foundation of Quantum Gravity. The electron consists of the Einstein spacetime components and, due to the fundamental/Newtonian spacetime, can disappear in one place and appear in another, and so on. Such behaviour leads to the wave function. We can see that quantum behaviour follows from the existence of the two parallel spacetimes. The value of the gravitational constant depends on the internal structure of the neutrinos and the inertial mass density of the Newtonian spacetime. This means that Quantum Gravity is associated with the quantum behaviour of the neutrinos. Neutrinos consist of the closed strings, so neutrinos can be quantum particles only in a spacetime composed of the closed strings. Such a spacetime existed only in the era of inflation. During this era, this spacetime decayed into small regions, and the finite regions were frozen inside the neutrinos. Quantum Gravity was valid in the era of inflation only. Today gravity is classical because, due to the lack of a spacetime composed of the closed strings, neutrino-antineutrino pairs cannot be created from such spacetime components in the way electron-positron pairs are created from the Einstein spacetime components.

Inflationary theories need reformulation. Due to the flows of finite regions of the Newtonian spacetime (on a cosmic scale), the concentrations and subsequent inflations of tachyon fields are possible. Inflations of tachyon fields are possible also due to collapses of tremendous masses. To destroy gravity, an inertial mass density higher than approximately 10^38 kg/m^3 is needed. To destroy the closed strings, the inertial mass density would have to be higher still. This is impossible in our Universe. Inflation can lead to the Protoworld and to the cosmic loop, i.e. to the early universe.

Supersymmetry is misinterpreted. The Newtonian and Einstein spacetimes are more symmetrical when particles arise as particle-antiparticle pairs (bosons). The electron-positron pair is the superpartner of the electron, the neutrino-antineutrino pair is the superpartner of the neutrino, and so on. There is also the fermion-boson supersymmetry that follows from the phase transitions of the imaginary Newtonian spacetime. Inside the stable objects (fermions) appear the loops (bosons). The ratio of the mass of a stable object to that of the associated loop is 10.77. The postulated exotic particles do not exist.

Unification of the fundamental interactions needs revision. Due to the dynamic viscosity of the tachyons, the fundamental force exists. Due to the phase transitions of the Newtonian spacetime, there appear the four known different interactions and the entanglement. A coherent description of all interactions dependent on mass is needed. We must reformulate the description of the weak and strong interactions, especially at low energy. Unification of all interactions via a superforce is impossible. When we destroy the internal structure of baryons, the strong interactions disappear; the baryons then decay into the Einstein spacetime components.

Imaginary Newtonian spacetime: Stephen Hawking has written about and analysed imaginary time. I believe that imaginary time exists together with imaginary space, i.e. the imaginary Newtonian spacetime composed of structureless tachyons that have a positive inertial mass. Free tachyons are imaginary because they have broken contact with the rest of nature - they are bare particles without an internal structure.
For quantum physics, the theories of relativity, inflation and long-distance entanglement require tachyons.

Broken symmetries: The origin of the matter-antimatter asymmetry is associated with a local asymmetry of the Einstein spacetime. In a symmetrical Einstein spacetime, a particle and its antiparticle have the same lifetime. This is inconsistent with the assumptions applied in today's mainstream theories.

Higgs mechanism: The mass gaps arise due to the weak interactions of the Einstein spacetime components. They produce negative pressure inside and near them in the Newtonian spacetime (this is the modified Higgs field, which is a gravitationally massless and, in approximation, scalar field). When the regions with negative pressure partially overlap, there appears an attraction between the Einstein spacetime components, which increases the local mass density of this spacetime. This means that mass gap(s) can appear. The inertial mass is more fundamental than pure energy (whose mass is equal to zero). The fields having inertial and/or gravitational mass density not equal to zero (for example, the Newtonian spacetime and the Einstein spacetime) carry the pure energy.

QED: In the Everlasting Theory, the weak mass of the bare electron is equal to its electromagnetic mass. The QED describes the creations and annihilations of the electron-positron pairs. The electromagnetic mass of a pair is equal to the bare mass of the electron. The renormalization in the QED leads to the radiation mass. It is the result of subtracting the bare mass of the electron from the real mass of the electron (the real mass is the parameter in the QED; the bare mass has the same value in both theories). This means that both theories should lead to the same theoretical results. We can see that within the QED we secretly assume that the electromagnetic mass of the electron is two times smaller than the bare mass of the electron. This is the 'hocus-pocus'. We must change the mainstream picture of the electron. We must eliminate the hocus-pocus. Then the QED will become the very simple non-perturbative theory of the electron described within the Everlasting Theory. We can formulate a new electroweak theory equivalent to the QED. This is possible because the Einstein spacetime and the electron carry the electromagnetic and weak interactions. The theoretical results obtained within the QED are calculated only for the first few orders of the perturbation theory, so they must be worse than those calculated for the electron within the Everlasting Theory.

Electroweak theory is correct for the following interval of energies: (125 GeV, 18 TeV). Due to the hierarchy-mass error, this theory is incorrect for energies lower than approximately 125 GeV.

Neutrino speed: Generally, the speed of neutrinos is equal to the speed of light, but in specific processes superluminal neutrinos can appear, as, for example, the neutrinos emitted in the supernova SN 1987A explosion. I showed that neutrino speeds higher than c are associated with the non-perturbative stage inside baryons. It is obvious that the coupling constants for the weak interactions of the muons, pions and W bosons differ. This means that different forces act on the neutrinos in the weak decays inside the strong fields. This theory shows that the neutrino mass cannot change. Then, from Newtonian mechanics, it follows that they should move with different speeds.
These speeds should depend on the lifetimes of the particles interacting weakly with the interior of the non-perturbative structure of the baryons.

Yang-Mills theory in the non-perturbative regime: Yang-Mills theory is a gauge theory with a non-Abelian symmetry group (given by a Lagrangian) based on the SU(N) group, and QCD is an SU(3) Yang-Mills theory. Yang-Mills theory in the non-perturbative regime, i.e. for large values of the running coupling for the strong interactions, or at energy scales relevant for describing atomic nuclei, is an unsolved problem. At low energy, confinement has not been theoretically proven. Since the vector potential can be chosen arbitrarily, we must introduce a ghost (an unphysical complex scalar) field. In the high-energy regime the alpha_strong is small, so we can apply perturbation theory to prove asymptotic freedom. Most of the difficulties appear at low energy: in particular, we cannot prove that QCD confines at low energy, and we cannot describe the phenomena which lead to the mass gap(s) (the Higgs mechanism). Moreover, in the infrared limit the beta function is not known. The Everlasting Theory shows that the unsolved problems at low energy follow from the fact that the mainstream theories neglect the internal structure of the bare fermions, but also of the photons and gluons, because their carriers, i.e. the binary systems of neutrinos, are fermion-antifermion pairs. In reality, there is a torus with a ball in its centre, composed of the carriers of gluons or photons. It is very difficult to describe the internal structure of the bare fermions mathematically in such a way that it can be added to a Lagrangian. The perturbative theories such as the QED and QCD assume that there is a point bare particle that emits and absorbs, respectively, the photons and gluons. The photons create the electron-positron pairs whereas the gluons create the quark-antiquark pairs; then they annihilate, and the diagrams appear. Both theories say nothing about the internal structure of the Einstein spacetime that is the scene for these two theories. There is also the unsolved problem of how point particles can emit and absorb anything. This suggests that in reality the point particles are not point particles. The Feynman QED has no problem predicting experimental data, whereas the QCD does not lead to the exact masses of the up and down quarks, and so neither to the properties of particles composed of these quarks. This must follow from the fact that we neglect the internal structure of the Einstein spacetime and of the bare particles. The QED has no problems because all photons in the Einstein spacetime behave the same; this is because the Einstein spacetime has no internal helicity. The internal helicity of the strong field follows from the internal structure of the bare baryons, i.e. the core of baryons. When we neglect this structure, the problems in the QCD appear. The QED and QCD are perturbative theories whereas the Everlasting Theory is a non-perturbative theory. Why must the ultimate theory contain both non-perturbative and perturbative theories? The ground state of the Einstein spacetime consists of the non-rotating-spin neutrino-antineutrino pairs. The total internal helicity of this state is zero, and it consists of particles whose spin is unitary. In such a spacetime there cannot appear loops having internal helicity, i.e. carrying mass. In reality, a unitary-spin loop (the loop state) is a binary system of two entangled half-integral-spin loops (total spin 2·1/2 = 1) with opposite internal helicities, i.e.
the resultant internal helicity is zero. Then, in such a spacetime, turbulences do not appear. Such a loop can easily transform into a fermion-antifermion pair (the fermion state). Perturbation theories concern the loop states whereas the non-perturbative theories concern the fermion states. In a non-perturbative theory such as the Everlasting Theory, we cannot neglect the internal structure of the bare fermions (there is a torus with a ball in its centre, and virtual pair(s) of fermions outside the bare fermion). In the QED both states, i.e. the loop state and the fermion state, are separated in time, whereas in the QCD they are not. Moreover, the QED and the Everlasting Theory are energetically equivalent, so within these theories we should obtain the same theoretical results. In baryons, both states are valid all the time, but the non-perturbative fermion state dominates at low energy whereas the loop state dominates at high energy. It is, however, easier to describe the liquid-like plasma within the fermion state. Since there are creations of fermion-antifermion pairs from loops and annihilations back to loops, both states (loop and fermion) are energetically equivalent, but the bare-fermion state is mathematically much simpler. At the beginning, it was assumed that the loops are responsible for the strong interactions. We can assume that the pairs of particles (i.e. the electron-positron pairs and the quark-antiquark pairs) arise, respectively, as photon or gluon loops with spin equal to 1, which transform into the torus-antitorus state. The spin polarization of the tori components leads to the circular and point/ball mass. After the period of spinning, due to the emissions of the surplus neutrino-antineutrino pairs, the masses of the pairs vanish. We can see that the perturbative theories concern the phenomena associated with the processes of emission of the surplus neutrino-antineutrino pairs the Einstein spacetime consists of. Due to the surplus energy there appear processes described by the 1-loop, 2-loop, 3-loop, and so on, diagrams. The increasing number of loops in the succeeding diagrams follows from the fact that the succeeding states must differ. We can see that we neglect the loop/torus state. The loop/torus state is the stable state for the period of spinning of the electron and is stable all the time in the cores of baryons. This means that we can describe this state via a non-perturbative theory. This non-perturbative state is very important in the QCD because the loops produced inside the torus, which are responsible for the strong interactions, have internal helicity, similarly to the gluons exchanged between the sham quark-antiquark pairs produced in the strong field. This leads to new phenomena inside baryons. Such phenomena are not important in the QED because, for electrons, the perturbative state (i.e. the phenomena after the disappearance of the masses of the fermion-antifermion pairs) begins just after the period of spinning of the electron, i.e. after the non-perturbative state. In contrast to the renewable/quantum particles such as electrons, or the quark-antiquark pairs in strong fields, inside the core of baryons there is, all the time, the stable torus with a ball in its centre. This means that both states, i.e. the non-perturbative and the perturbative, exist all the time. This is the reason why the QCD is not as precise as the QED. We can apply the perturbative QCD for very high energies or for short-distance interactions.
This is because the strong-weak coupling constant is then small (in my theory, but also in the QCD). Due to the very stable core inside baryons, composed of the Einstein spacetime components, and the disappearance of the masses of the sham quark-antiquark pairs, the perturbative and non-perturbative states exist simultaneously all the time. The number of disappearances of masses per unit of time increases when the energy increases. This means that, contrary to the non-perturbative state, which is valid for the whole energy spectrum, the perturbative state is obligatory at high energies, and big problems should appear at low energy.

The field associated with the Yang-Mills theory is massless, i.e. it consists of the photons and gluons - the rotational energies (so massless) of the Einstein spacetime components. Massless gluons transform into massless photons outside the strong field, so gluons are not long-distance particles. This is due to the internal helicities/colours of the strong field and of the carriers of gluons and photons, i.e. the entangled binary systems of neutrinos. Due to the weak interactions of the neutrino-antineutrino pairs, there appear balls composed of such pairs. The Einstein spacetime components decrease the local pressure in the Newtonian spacetime (this is the modified Higgs field). In very good approximation, the modified Higgs field is a gravitationally massless scalar field. Since this field is gravitationally massless, we can call it the ghost field. An attraction between the Einstein spacetime components appears when the regions with negative pressure partially overlap. This is the confinement. The local mass densities inside the balls are higher than the mean mass density of the Einstein spacetime. There appear masses composed of the carriers of gluons. We can see that the particles acquire their mass through symmetry breaking in the fields carrying the massless fields. Due to the coupling constants for the weak interactions, the masses are equal to the masses of the W and Z bosons (in this book there are very simple calculations of these masses), but these bosons are not responsible for the weak interactions in the low-energy regime. We can see that my theory shows that the Yang-Mills theory has the mass gap(s). There is no proof that QCD confines at low energy. From my description it follows that there is no confinement at very low energy, but my QCD 'confines' at low energy due to the internal helicities/colours of the strong field and of the carriers of gluons and photons. Simply, outside the strong field we can neglect the internal helicities/colours, so the gluons behave as photons. We can say that this is due to the properties of the carriers of gluons and photons, i.e. due to the mass of the Einstein spacetime components. On the other hand, the finite range of the strong fields follows from the circumference of the large loops that are responsible for the strong interactions. At the beginning of inflation, in the ghost field there were produced the closed strings from the tachyons and, next, the neutrinos from the binary systems of the inflexible closed strings. Quantum Gravity concerns the behaviour of the neutrinos in the very short era of inflation. The internal structure of the bare fermions eliminates the singularities and infinities from the theory. The above description is the prelude to the non-perturbative M-theory that is the essential part of the Everlasting Theory.
The Everlasting Theory shows that there is the atom-like structure of baryons and that this theory is not an alternative theory in relation to the Standard Model. This theory is the fundamental lacking part of the Standard Model and includes gravity. The non-perturbative M-theory concerns the gluon large loops produced inside the torus in the core of baryons, the photon loops, and the stable states of the tori and the balls, whereas the perturbative mainstream theories concern the phenomena caused by energy emitted in the annihilations of the fermion-antifermion pairs. Due to the creations of the loops from this energy, there appear the n-loop diagrams. This is the reason why within the perturbative theories we cannot decode the internal structure of the bare fermions. We cannot compare the non-perturbative state with the perturbative state because they are not descriptions of the same phenomena. The non-perturbative state is the fundamental complement of the perturbative state. For very high collision energies, the atom-like structure of baryons is destroyed, so there are weak signals of the existence of such a structure only for the medium energies.

The important conclusions: Gravity is associated with the Newtonian spacetime (the gas composed of tachyons) and with the Einstein spacetime (the gas composed of the non-rotating-spin binary systems of neutrinos). More precisely, the gravitational constant depends on the internal structure of neutrinos and the inertial mass density of the Newtonian spacetime. Neutrinos consist of the superluminal binary systems of the closed strings. The closed strings produce jets in the Newtonian spacetime. The gravitational interactions we can describe as gradients in the Newtonian spacetime. The gradients are impressed on the Einstein spacetime also. The Everlasting Theory shows that there are 8 different rotating binary systems of neutrinos (4 left-handed and 4 right-handed), and each has a mass of approximately 6.7·10^-67 kg. Due to the lack of the spacetime composed of the binary systems of the closed strings, the neutrinos cannot change their mass. This means that neutrinos which appear in different weak decays inside the strong field should have different speeds. The binary systems of neutrinos carry the massless photons and gluons - these are the rotational energies of the Einstein spacetime components. The internal structure of the baryons causes the internal structure of the Einstein spacetime components (the 8 different rotating components) to be disclosed. The entangled gluons transform inside the core of baryons into the loops. When a loop overlaps with the circular axis of baryons (the large loop), its mass is 67.5444 MeV. Such large loops are responsible for the strong interactions of mesons, and the running coupling at low energy is 1. The binary systems of such loops, i.e. the neutral pions, are responsible for the strong interactions of nucleons; for the strong interactions of baryons the running coupling at low energy is 14.4. Acceleration of a nucleon causes its mass to increase. This follows from the formula for spin (m·vspin·r = h/2) for the stable fermions. At the same time the mass of the loops decreases, and so does the running coupling. This follows from the formula for spin (ΔE·Tlifetime = h, where the lifetime is inversely proportional to the spin speed vspin) for the virtual large loops responsible for the strong interactions. There is an asymptote for high energies equal to 0.1139.
The range of the gluon loops is equal to the circumference of the loop, i.e. 2.92 fm, and such is the origin of the 'confinement' of the gluon loops responsible for the strong interactions. What is the mechanism of the disclosure of the properties of the Einstein spacetime inside the baryons? The torus inside the core of baryons has internal helicity, so the gluon loops emitted by the core adopt this helicity. The components of the carrier of a non-entangled gluon (i.e. the two entangled neutrinos) also have internal helicities. The three internal helicities of a non-entangled carrier of a gluon lead to the 8 different gluons. We can say that the internal structure of the Einstein spacetime and the core of baryons are responsible for the transformation of the photons into gluons at distances smaller than 2.92 fm from the centre of nucleons. In the centre of the core of baryons arises a sphere inside which the Einstein spacetime thickens. The radius of this sphere is 0.871·10^-17 m, whereas its mass (the mass gap) is 424.124 MeV. This thickened Einstein spacetime is responsible for the weak interactions of the baryons. The mass density of the thickened volumes is only 1/40,363 higher than that of the Einstein spacetime. The four interactions associated with the Einstein spacetime components we can describe by means of the Riemann metric and the Einstein equations applied in the General Theory of Relativity, written for a phase space containing more elements in order to have room for all types of forces. In reality, higher dimensions do not exist. The numbers 10 and 26 are the numbers of elements of the phase spaces, respectively, for the single or binary systems of closed strings and for the single or binary systems of neutrinos. A phase space contains the elements describing the position, shape and motions of a particle. The Everlasting Theory, and the Special Number Theory presented together with it, show the origin of the magic numbers which appear in the string/M theory, i.e. 8(10) and 24(26). Due to the ideas presented in the Special Number Theory, these magic numbers can appear in different mathematical expressions, but nature realizes only one. It looks similar to the theory of great numbers - not all correlations have physical meaning. The properties of the closed strings lead to the phase transitions, so the mass-energy part in General Relativity is dual. The greater tori consist of smaller tori, and so on. There arise the neutrinos, the cores of baryons and the protoworlds. Outside the core of baryons the Titius-Bode law for the strong interactions is obligatory. Einstein tried to change the mass-energy part in his equations to describe the internal structure of particles, but it failed. We can see that we can generalize the Einstein equations applied in General Relativity. The enlarged Riemann metric includes gravity and the Yang-Mills fields, which leads to the photons, the gluons and the regions with thickened Einstein spacetime (such regions have mass because the Einstein spacetime has a mass density not equal to zero) responsible for the weak interactions of baryons. The weak interactions between the thickened regions of the Einstein spacetime are possible when their surfaces are at a distance equal to or shorter than 3482.87 times the external radius of a neutrino. We can see that the weak field practically overlaps with the thickened regions. This means that the weak interactions are short-distance interactions.
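Two of the numbers quoted in this recapitulation can be reproduced from values stated elsewhere in the document: the 2.92 fm confinement range is simply the circumference of a large loop of radius 0.465 fm, and the ratio 10.77 quoted earlier for a stable object and its associated loop matches the core mass 727.44 MeV divided by the large-loop mass 67.5444 MeV. A small illustrative Python check (variable names are mine):

```python
import math

r_loop_fm = 0.465                  # radius of the large loop, fm (quoted earlier)
print(2 * math.pi * r_loop_fm)     # ≈ 2.92 fm, the quoted gluon-loop range

m_core_MeV = 727.44                # mass of the core of baryons, MeV
m_loop_MeV = 67.5444               # mass of the large loop, MeV
print(m_core_MeV / m_loop_MeV)     # ≈ 10.77, the stable-object/loop mass ratio
```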
Exchanged small loops composed of the binary systems of closed strings are responsible for the entanglement of particles (also for the long-distance entanglement). Due to the symmetrical decays of the virtual bosons in the strong field, outside the core of baryons the Titius-Bode law for the strong interactions is obligatory. Due to the new theory of weak interactions, to calculate the radiation masses we can apply two dual methods, i.e. the Feynman diagrams or the non-perturbative theory described within the Everlasting Theory. The latter theory is much simpler and gives better results. Due to the properties of the Einstein spacetime, quantization of the Yang-Mills fields is possible. A torus of the electron is an entangled and specifically polarized zero-energy photon. The electric charge-anticharge pairs arise from the loops composed of the Einstein spacetime components, and the radii of the loops are equal to the radii of the equators of the tori/electric charges. The core of protons and the torus of positrons have the same electric charge, but there is room for the sham quark-antiquark pairs. This theory leads to the masses of the quarks and shows that the other properties of the quarks are different. The Yang-Mills theory (which leads to the gluons too) is correct, whereas the theory of quarks is correct only partially. Because the quark theory is partially incorrect, within the QCD we cannot calculate the exact rest masses of the up and down quarks. The E. Kasner solution for the flat anisotropic model (1921) in the General Theory of Relativity leads to the numbers characteristic of the bare fermions, especially of the tori. On the other hand, the internal structure of the bare fermions leads to the known interactions and to the quantum behaviour of the electron. The electron consists of the Einstein spacetime components and, due to the fundamental/Newtonian spacetime, can disappear in one place and appear in another, and so on. Such behaviour leads to the wave function. We can see that quantum behaviour follows from the existence of the two parallel spacetimes. The value of the gravitational constant depends on the internal structure of the neutrinos and the inertial mass density of the Newtonian spacetime. This means that Quantum Gravity is associated with the quantum behaviour of the neutrinos. Neutrinos consist of the binary systems of the closed strings, so neutrinos can be quantum particles only in a spacetime composed of the binary systems of the closed strings. Such a spacetime existed only in the era of inflation. During this era, this spacetime decayed into small regions, and today the binary systems of the closed strings are inside the neutrinos. Quantum Gravity was valid in the era of inflation only. Today gravity is classical because, due to the lack of a spacetime composed of the closed strings, neutrino-antineutrino pairs cannot be created from such spacetime components in the way electron-positron pairs are created from the Einstein spacetime components. The Kasner solution, and the scales for the charges (weak, electric and strong) in the generalized Kasner solution and the BKL oscillatory model, lead to the phase transitions of the fundamental spacetime and to the Protoworld→neutrino transition that caused the exit of the early Universe from the black-hole state. The phase transitions are the foundations of the modified/useful string/M theory. There is also the ultimate equation that combines the masses of the sources of all types of interactions.
The Kasner solution leads to the new cosmology as well. We can say also that the Kasner solution is the foundation of the Quantum Theory of Gravity and of a Quantum Theory of Fields without singularities and infinities. The Kasner solution is asymmetric in time because stable structures appear. The reduction of the state vectors is asymmetric in time as well. The Kasner solution shows that the theory of gravity is a more fundamental theory than the Quantum Theory of Fields. This was postulated by Roger Penrose.

The ultimate equation: We can notice that the range of the weak interactions of the neutrinos, Rweak(neutrino) = 3482.87·rneutrino, divided by the Compton length λ of the bare electron (see formula (17)), is equal to the Reynolds number NR for the maximally dense Newtonian spacetime (see formula (1)). Such a state of spacetime exists inside and on the surface of the closed strings the neutrinos consist of. Applying the above formula and formulae (1)-(49), especially (6), (13)-(16) and (47)-(49), we can write the ultimate equation, which ties the properties of the pieces of space, i.e. the tachyons, to all the masses/sources responsible for all types of interactions. To simplify the ultimate equation, we assume that the ratio of the mean distance between the neutrino-antineutrino pairs in the point mass to that distance in the Einstein spacetime is equal to 1. In reality, the ratio is (ρE/(ρE + ρpoint(proton)))^(1/3) = 0.9999917, where ρE is the mass density of the Einstein spacetime. This means that there are five significant digits. The ultimate equation looks as follows

4π·mtachyon·ρ/(3η) = (2mclosed-string/h)^2 · (2mneutrino/ρE)^(1/3) · (mbare(electron)/2) · c · (X/H+)^(1/2). (280)

The 4π/3 on the left side of the ultimate equation shows that the tachyons are balls. The mean mass of the tachyons is the mean mass of the source of the fundamental interaction that follows from the direct collisions of tachyons and their dynamic viscosity. The ρ is the mass density of the pieces of space, i.e. of the tachyons (it is not the inertial mass density of the Newtonian spacetime). The η is the dynamic viscosity of the pieces of space, i.e. of the tachyons. The two masses of the binary systems of closed strings (their total spin is 2·h/2 = h) on the right side of the ultimate equation are the source of the entanglement. The two masses of neutrinos, i.e. the neutrino-antineutrino pair, are the source of the gravitational field. The mass of a single neutrino is the smallest gravitational mass. In the equation the smallest gravitational mass is multiplied by 2, which means that the non-rotating-spin neutrino-antineutrino pairs (the 2) are the components of the ground state of the Einstein spacetime (the ρE in the denominator). Half of the mass of the bare electron is the mass of the electric charge, i.e. the mass of the source of the electromagnetic interaction. The c is the speed of photons and gluons. The transitions of the carriers of the photons and gluons, i.e. of the neutrino-antineutrino pairs, from the electromagnetic field to the strong field force the photon→gluon transitions. The X is the mass of the torus inside the core of baryons in which the large loops arise (they are responsible for the strong interactions); the X is the mass of the strong charge/mass. Outside the strong field, due to the gluon→photon transitions, it behaves as the electric charge of a positron. The H+ = X + Y − binding energy, where Y is the point mass of the core of baryons.
The Y is the source of the weak interaction in the baryons in the low-energy regime. It is a relativistic object, so it can also produce the W and Z bosons. The ratio X/H+ appeared in formula (82), which defines the mass of the source of the strong-weak interactions for colliding protons. The calculations lead to the running coupling for the strong-weak interactions. We can see that, due to the phase transitions of the Newtonian spacetime, first appears the Planck constant, next the gravitational constant associated with the mass of the neutrino, and next the electric charge and the speed c.

To give the possibility of a quick verification of the correctness of the ultimate equation, I write the needed values once more. I do not write the units; they are SI units, except for X and H+ (in MeV).

mtachyon = 3.752673·10^-107
η = 1.87516465·10^138
ρ = 8.32192436·10^85
mclosed-string = 2.3400784·10^-87
h = 1.054571548·10^-34
mneutrino = 3.3349306·10^-67
ρE = 1.10220055·10^28
c = 2.99792458·10^8
mbare(electron) = 9.09883020·10^-31
X = 318.295537
H+ = 727.440123

The left and right sides of the ultimate equation are both 6.9761·10^-159 - we know that we can write only five significant digits.

How can we verify my theory? My theory identifies where the mainstream theories are inconsistent with experimental data:
1. Neutrinos produced in specific processes can move with speeds higher than photons and gluons. This follows from the atom-like structure of baryons.
2. There should be an asymptote for the running coupling for the strong interactions of colliding nucleons - its value equals 0.1139. This follows from the packing to maximum of the cores of baryons.
3. There should be an upper limit for the energy of a relativistic proton of about 18 TeV. This follows from the internal structure of the core of baryons.
4. There should be a weak signal of the existence of a W-type boson carrying a mass of about 17 TeV. This follows from the internal structure of the Einstein spacetime and the core of baryons.
5. There should exist stable binary systems of neutrinos (spin = 1), i.e. the carriers of photons and gluons, and the non-rotating-spin binary systems of binary systems of neutrinos (spin = 2), i.e. the carriers of gravitational energy.
6. Gravitons and gravitational waves should not exist. This follows from the properties of the two spacetimes.
7. There are weak signals of the existence of new bosons that disappear for high energies. This follows from the fact that, due to the high-energy collisions, the Titius-Bode orbits are destroyed.

Turning points in the formulation of the ultimate theory: At the beginning, I noticed that the following formula describes how to calculate the mass of a hyperon: m(MeV) = 939 + 176n + 26d, where n = 0, 1, 2, 3 and d = 0, 1, 3, 7. For a nucleon n = 0 and d = 0, which gives 939 MeV. For lambda n = 1 and d = 0, which gives 1115 MeV. For sigma n = 1 and d = 3, which gives 1193 MeV. For ksi n = 2 and d = 1, which gives 1317 MeV. For omega n = 3 and d = 7, which gives 1649 MeV. I later noticed that the mass distances between the resonances, and between the resonances and the hyperons, are approximately 200 MeV, 300 MeV, 400 MeV and 700 MeV. This was in 1976. In 1985, I grasped that, in order to obtain positive theoretical results for hadrons, we should assume that outside the core of a nucleon the Titius-Bode law for strong interactions is in force. On the orbits are relativistic pions.
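Both the ultimate equation (280) and the hyperon mass formula above are pure arithmetic on the listed constants, so they can be checked mechanically. A minimal Python sketch (the variable names are mine; the constants are copied from the list above):

```python
from math import pi

# Constants as listed in the text (SI units except X, H_plus in MeV)
m_tachyon = 3.752673e-107
eta = 1.87516465e138
rho = 8.32192436e85
m_closed_string = 2.3400784e-87
h = 1.054571548e-34
m_neutrino = 3.3349306e-67
rho_E = 1.10220055e28
c = 2.99792458e8
m_bare_electron = 9.09883020e-31
X, H_plus = 318.295537, 727.440123

# Ultimate equation (280): both sides should agree to ~5 significant digits
lhs = 4 * pi * m_tachyon * rho / (3 * eta)
rhs = ((2 * m_closed_string / h) ** 2
       * (2 * m_neutrino / rho_E) ** (1 / 3)
       * (m_bare_electron / 2) * c
       * (X / H_plus) ** 0.5)
print(f"{lhs:.4e}  {rhs:.4e}")   # both ≈ 6.976e-159, matching the quoted 6.9761e-159

# Hyperon mass formula m(MeV) = 939 + 176n + 26d
for name, n, d in [("nucleon", 0, 0), ("lambda", 1, 0), ("sigma", 1, 3),
                   ("ksi", 2, 1), ("omega", 3, 7)]:
    print(name, 939 + 176 * n + 26 * d)   # 939, 1115, 1193, 1317, 1649
```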
The year 1997 was the most productive for me because I described the phase transitions of the Newtonian spacetime and the four-neutrino symmetry, which also leads to the distribution of galaxies that is visible today, and I also described the fundamental phenomena associated with the cosmology of the Universe. In this eventful year, I practically formulated the new particle physics and the new cosmology.

The final recapitulation

The Everlasting Theory is the lacking part of the ultimate theory and is free from singularities and infinities. There are two long-distance interactions. This suggests that there are two parallel spacetimes. To explain the inflation, the long-distance entanglement, the cohesion of the wave function and the constancy of the speed of light, we need the fundamental spacetime composed of tachyons. The gas composed of tachyons is the fundamental/Newtonian spacetime, whereas the gas composed of the neutrino-antineutrino pairs is the Einstein spacetime. There are two basic phenomena. The saturation of the interactions of the tachyons leads to the phase transitions of the Newtonian spacetime. The first phase transition leads to the closed strings the neutrinos consist of, the second leads to the Einstein spacetime, the third to the core of baryons, and the fourth to the cosmic object, the Protoworld, after the era of inflation (there appears the new cosmology). The second phenomenon, i.e. the symmetrical decays of the bosons at very high temperatures, leads to the Titius-Bode law for the strong interactions and to the Titius-Bode law for the gravitational black holes. There appears the atom-like structure of baryons. On the basis of these two phenomena, and only 7 parameters, I calculated several hundred basic theoretical results consistent with or very close to experimental data. I calculated the basic physical constants as well. Nature, on its lowest levels, once again behaves classically. The bare fermions consist of a torus with a ball in its centre. The mainstream theories neglect the internal structure of bare fermions. The core of baryons is the black hole in respect of the strong interactions, whereas the ball in its centre is the black hole in respect of the weak interactions. Their masses are quantized, so they emit the surplus energy. The same concerns the gravitational black holes. Taking these facts into consideration, we can formulate a theory simpler than Newtonian mechanics. The quantum behaviour follows from the existence of the two parallel spacetimes. The Kasner solution for the flat anisotropic model (1921) in the General Theory of Relativity leads to the numbers characteristic of the bare fermions. Quantum Gravity is associated with the quantum behaviour of the neutrinos. Neutrinos consist of the binary systems of the closed strings, so neutrinos can be quantum particles only in a spacetime composed of the binary systems of the closed strings. Such a spacetime existed only in the era of inflation. Today gravity is classical. The Kasner solution, and the scales for the charges (weak, electric and strong) in the generalized Kasner solution and the BKL oscillatory model, lead to the phase transitions of the fundamental spacetime and to the transition of the Protoworld into a neutrino that caused the exit of the early Universe from the black-hole state. The Kasner solution is the foundation of Quantum Gravity and of the Quantum Theory of Fields. The phase transitions are the foundations of the modified/useful string/M theory.
There is also the ultimate equation that combines the masses of the sources of all types of interactions. The ultimate theory must contain non-perturbative and perturbative theories. The ground state of the Einstein spacetime consists of the non-rotating-spin neutrino-antineutrino pairs. The total helicity of this state is zero, and it consists of particles whose spin is unitary. In such a spacetime there cannot appear loops which have helicity, and so mass as well. In reality, a unitary-spin loop (the loop state) is a binary system of two entangled half-integral-spin loops with opposite helicities, i.e. the resultant helicity is zero. In such a spacetime turbulences do not appear. Such a loop can easily transform into a fermion-antifermion pair (the fermion state). Perturbation theories concern the loop states whereas the non-perturbative theories concern the fermion states, so we cannot neglect the structure of the bare fermions.

Definitions

Acceleration of expansion of the Universe: Due to the decays of the superphotons, the Universe significantly flared up two times, i.e. about 13.2 and 5.7 billion years ago. From the second flare-up it follows that the acceleration of the expansion of the Universe is an illusion. The applied formula for the redshift, calculated on the basis of the observed redshift, is wrong and leads to the illusion that the expansion of our Universe accelerates.

'Antigravity': In the thickened regions of the Einstein spacetime, repulsive forces act on masses.

Antiparallel jets: Antiparallel jets produce binary loops when the loops in a binary system have different internal helicities and parallel spins whose directions overlap. Such a situation occurs, for example, in the binary systems of the closed strings and in the cores of protogalaxies. Due to the internal helicities, a binary loop sucks up spacetime from the plane perpendicular to the spin and emits it as jets along the direction of the spin. In the protogalaxies, due to the fluxes in spacetime, we should observe the capture of matter from the accretion discs by the fluxes.

Background: The volume filled with internally structureless tachyons (the Newtonian spacetime is the background for gravitational interactions), non-rotating-spin binary systems of neutrinos (excited states of the Einstein spacetime are responsible for the electromagnetic, weak and strong interactions), and virtual particle-antiparticle pairs (virtual particles do not change the mean mass density of the background).

Baryons: In their centre is the core composed of a torus (it is the electric charge) and a point mass. The point mass is responsible for the weak interactions. On the circular axis inside the torus are produced the large loops responsible for the strong interactions. Outside the core the Titius-Bode law for the strong interactions is obligatory. On the orbits are one or more pions.

Big bang theory: An enormous region of the Newtonian spacetime can thicken and then expand with superluminal speeds (inflation). Such events happen all the time but, due to the superluminal speeds, the probability that this will happen near our Universe is practically equal to zero. During such an inflation arise the closed strings and the binary systems of neutrinos the Einstein spacetime consists of. The speed of the entangled neutrino-antineutrino pairs (the c) stops the inflation. Next there appear the neutrons. There, the Protoworld and the cosmic loop, i.e. the early universe, can appear.
However, the 'soft' big bangs are associated with the explosions of universes that have a strictly determined mass (they are the cosmic loops composed of the neutron black holes) - such explosions are due to the Protoworld→neutrino transition. During such a transition the thickened Einstein spacetime, i.e. the dark energy, appears. The dark energy is composed of the surplus non-rotating-spin binary systems of neutrinos. The inflows of the dark energy into the cosmic loop cause its exit from the black-hole state. We can see that there are two main stages associated with the new big bang theory: the inflationary stages associated with the Newtonian spacetime, and the protoworld stages leading to the 'soft' big bangs of the cosmic loops.

Black holes: The Everlasting Theory shows that the cores of protons are the black holes with respect to the strong interactions (their mass is 727.44 MeV). The thickened regions of the Einstein spacetime (which consists of the non-rotating-spin binary systems of neutrinos) in the centres of the cores of baryons are the black holes with respect to the weak interactions (their mass is 424.12 MeV). The point masses of the muons also are black holes with respect to the weak interactions but, contrary to the point mass of baryons, there are the two energetic neutrinos, each with an energy of about 17.7 MeV. The greatest neutron stars are the gravitational black holes. Their mass is about 24.8 times greater than the mass of the Sun. The magnetars have masses from 25 to 50 times greater than the mass of the Sun; in their centres are the biggest neutron stars. The greater stars and the bigger black holes consist of the magnetars. Due to the new theory of the weak interactions, inside our Universe the cores of nucleons cannot collapse. The black holes are everywhere. Their masses are quantized, so they emit the surplus energy.

Broken symmetry: In symmetrical fields there can appear pairs composed of rotary vortices. The components of a pair have different internal helicities. This means that inside each component the symmetry is broken. Inside a rotary vortex there can appear electrically charged pairs in such a way that the components of a pair have different masses. This means that the symmetry of a field is broken twice. There can also exist regions in the Einstein spacetime containing different numbers of different neutrinos - this breaks the symmetry as well.

Closed strings: Closed strings arise on the surfaces of regions with tachyons packed to the maximum (the radius is approximately 0.95·10^-45 m, not approximately 10^-35 m as in the string/M theory). The natural speed of a closed string in the Newtonian spacetime is approximately 2.4·10^59 times higher than the speed of light in the spacetimes. The spin speed is practically equal to the mean linear speed of the tachyons. A closed string consists of K^2 tachyons (K = 0.79·10^10). Due to the mean linear and angular speeds of the tachyons in the Newtonian spacetime, only identical right- or left-handed closed strings appear. The maximum thickness of a closed string is equal to the diameter of a tachyon. A closed string is stable due to its shape, which creates negative pressure inside it. The spin of closed strings is half-integral. Each closed string produces one collimated jet in the Newtonian spacetime. Because the resultant internal helicity of spacetime must be equal to zero, the closed strings arise as closed string-antistring pairs.
To describe the position, shape and motions of a closed string we need three coordinates, two radii, one spin speed, one angular speed associated with the internal helicity, and the time associated with the linear speed. In order to describe the rotation of the spin vector we additionally need two angular speeds. This means that we need ten numbers to describe a closed string. In order to describe a string-antistring pair we also need a phase space containing ten elements, because the distance between the components in a pair follows from the thickness of a closed string.

Coherent mathematics: We cannot formulate coherent mathematics on the basis of points without size, because such points (even an infinite number of them) do not lead to axes, areas or volumes with non-zero sizes. Coherent physics also cannot start from sizeless points. True abstract mathematics likewise does not lead to the observed nature. The ultimate theory should begin from some physical objects.

Colours: They are the three internal helicities of the carriers of gluons (gluons are the 3-coloured particles) and the one internal helicity of the loops and tori in the strong field (they are the 1-coloured particles).

Cosmic loop: The loop inside the torus of the Protoworld, composed of the neutron black holes.

Cosmic-ray particles: The assumption that the ground state of the Einstein spacetime is the field composed of the non-rotating-spin binary systems of neutrinos leads to new particle physics and new cosmology. When a particle more massive than the binary system of neutrinos accelerates, it emits more and more energy. For example, the Everlasting Theory predicts that at energies above approximately 18 TeV per nucleon, a nucleon emits the surplus energy. Why, then, can we detect the ultra-energetic cosmic rays? Such cosmic rays are the very energetic neutrinos and binary systems of neutrinos. The several detected cosmic rays above the GZK limit arose at the beginning of the 'soft' big bang in the protuberances of the Einstein spacetime and were emitted by the quasars with redshift higher than z_ob = 1.

Dark energy: Finite fields composed of the surplus weak dipoles.

Dark matter: The photon galaxies (i.e. the entangled photons – the entanglement is due to the exchanges of the binary systems of the closed strings) coupling the cosmic objects inside a galaxy cause the illusion that dark matter exists. The dark matter also consists of the iron-nickel lumps produced in the explosions of the big stars just after the beginning of the 'soft' big bang. Because the dark matter arose just after the beginning of the 'soft' big bang, the temperature of the iron-nickel lumps is the same as that of the CMB radiation. This means that it is very difficult to detect the dark matter. Today, most of the dark matter is in the halos of galaxies. The ratio (10.77) of the mass of the core of nucleons (727.44 MeV) to the mass of the large loop (67.5444 MeV) is almost equal to the ratio (10.65) of the abundance of iron (90.64%) to the abundance of nickel (8.51%) in the lumps of the dark matter (see the numerical check below). Possibly this has some deeper meaning. Are the cores and the large loops the catalysts in the production of iron and nickel? The photon galaxies interact with the dark matter, i.e. the iron-nickel lumps. This leads to the conclusion that the dark matter should behave a little like a gas and a little like a solid body.
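The two ratios quoted in the Dark matter entry can be checked directly; a minimal sketch using only the figures given above:

    # Check: core-of-nucleon/large-loop mass ratio vs Fe/Ni abundance ratio.
    core_mass = 727.44        # MeV, mass of the core of nucleons
    large_loop = 67.5444      # MeV, mass of the large loop
    fe, ni = 90.64, 8.51      # %, quoted Fe and Ni abundances in the lumps
    print(round(core_mass / large_loop, 2))   # 10.77
    print(round(fe / ni, 2))                  # 10.65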
Most often, the planes of the photon galaxies are perpendicular to the magnetic axes of the massive galaxies, so due to the Titius-Bode law for the gravitational interactions each massive galaxy should contain a few parallel thin lenses, each composed of the dark matter and the photon galaxies. They should be parallel to the plane of the disc composed of the visible matter.

DNA: The precursors of the deoxyribonucleic acids (the DNAs) arose inside the cosmic loop composed of the neutron black holes, i.e. where the strong interactions dominated. With the strong/electric charge of the torus inside the core of baryons the ternary symmetry is associated. With each element of a ternary system a neutrino can interact weakly. The three neutrinos associated with a torus are entangled. Since in a ternary system the components can be in different states, trios composed of the same neutrinos can arise as well. The trios are the codons in the precursors of the DNAs. Due to the superphotons, the baryons were entangled too. Due to the beta decays, helices composed of the proton-electron pairs were produced, and with each proton-electron pair one codon composed of neutrinos was associated. The 2 different electric charges are the analogues of the deoxyribose and the phosphoric acid. The 4 different neutrinos are the analogues of the four different bases, i.e. A, C, G and T. In atoms, there are the two spin states of an electron in the ground state (up and down). This leads to the two threads in the helices, whereas the Pauli exclusion principle is responsible for the creation of the helices, i.e. each next proton in a helix must have a different direction of spin. The electroweak interactions of the precursors of the DNAs lead to the molecular DNAs. We can see that the precursors of the DNAs look like superphoton-like structures. This means that the superphotons could be the catalysts. There were about 10⁷⁸ superphotons. To create one of our entire genomes, about 10³⁶ superphotons are needed. This means that human-like life should be common. The 4 bases in DNA, the 8 gluons and the spin of the carriers of gravitational energy (the spin is 2 = 4·1/2) lead to only two families of neutrinos. The 'oscillations' of the neutrinos (in reality the 'oscillations' are exchanges; neutrinos are very stable particles) lead to the illusion that there are three families of neutrinos. Due to the ternary symmetry for the strong/electric charges of nucleons and the pairing of the atoms in the Earth's atmosphere (O₂, N₂) and of the electrons in the ground states of atoms, the six-fold axis of symmetry (3+3=6) typical of snowflakes appears.

Einstein spacetime: The field composed of the non-rotating-spin binary systems of neutrinos. The binary systems of neutrinos are weak dipoles composed of two opposite weak charges. The properties of a weak charge depend on the structure of the torus of a neutrino. It appears as a miniature of the electric charge of the proton.

Electromagnetic interaction: Electric charges polarize the Einstein spacetime. In the Einstein spacetime the virtual electron-positron pairs arise. Their annihilation creates divergent beams in the Einstein spacetime. Such phenomena create negative pressure in the Einstein spacetime. In the region between opposite electric charges, the density of the virtual electron-positron pairs is higher than in other parts. In the regions between like electric charges, such density is lower.
Electric charges can also interact due to the exchange of photons, since photons also produce real and virtual electron-positron pairs.

Electron: The electric charge of the electron arises through the entanglement and polarisation of the Einstein spacetime components, i.e. the neutrino-antineutrino pairs; therefore, the torus of an electron forms part of the Einstein spacetime. The axes of these dipoles are perpendicular to the surface of the torus and all the senses of the spins of the dipoles point towards the inside of the torus (charge) or the outside (anticharge). The polarized binary systems of neutrinos cross the circular axis and the centre of the torus, so they make half-turns in these places – there the two masses appear, i.e. the circular mass and the point mass. This is because such turns decrease the pressure in the Einstein spacetime, which causes new binary systems of neutrinos to flow into the bare electron (absorption). On the circular axis of the electron there is the whole charge and only half of the mass of the bare electron. After the time of spinning (it is the circumference of the equator of the torus divided by the c), due to the properties of the Newtonian spacetime, the electric charge disappears in one place and appears in another, and so on. The disappearances cause the mass of the electron to vanish (emission of the surplus neutrino-antineutrino pairs). We can see that the distributions of charge and mass are different and, for the very short time that follows from the mean speed of the tachyons, the electric charge and the mass of the electron can be separated spatially. But it is always true that half of the bare mass of the electron is associated with the electric charge. The spin polarization of the components of the electric charge of an electron is an analogue of the temperature gradients in a tropical cyclone – they are the effective causes of the flows/winds in the spacetime/atmosphere that increase the mass density (so also the mass) of the spacetime/atmosphere inside the cyclone/bare-electron (the outcome).

Elementary charge: The torus of an electron and the torus of the proton are composed of the same number of binary systems of neutrinos; therefore, both tori create the same number of polarized lines of electric forces in the Einstein spacetime. This means that the densities of the created lines are the same as well. In the torus of the proton the mean distance between the binary systems of neutrinos is approximately 554.3 times smaller than that found in the torus of an electron. Furthermore, virtual electron-positron pairs arise near the bare electron.

Elementary photon: It is the rotational energy of a neutrino-antineutrino pair. After the period of inflation, the carriers of photons, i.e. the neutrino-antineutrino pairs, behave classically, whereas the elementary photons, i.e. the massless rotational energies, behave as quantum particles, i.e. a massless rotational energy disappears in one place and appears in another, and so on.

Entangled particles: The long-distance entanglement of neutrinos is due to the exchanges of the superluminal quanta composed of the binary systems of the closed strings emitted by the neutrinos.

Evaporation of neutron black holes: The neutron black holes arose after the period of the inflation but before the beginning of the 'soft' big bang. The massive galaxies arose due to the evaporation of the neutron black holes the protogalaxies consisted of. The bigger cosmic structures composed of the protogalaxies also arose before the 'soft' big bang. The evaporation was due to the inflows of the dark energy.
The dark energy arose due to the collapse of the Protoworld before the 'soft' big bang. The dark energy is the thickened Einstein spacetime composed of the non-rotating-spin binary systems of neutrinos. To detect such binary systems we would have to measure mass with an accuracy of about 10⁻⁶⁷ kg. Today this is impossible. The dark matter consists of the iron-nickel lumps entangled via the binary systems of the closed strings. The dark matter arose in the era of the evaporation of the protogalaxies. The dark matter is in the halos of the galaxies and its temperature is the same as that of the CMB. Due to this temperature it is very difficult to detect it. The small protogalaxies arose due to the explosions of the big protogalaxies during the era of the evaporation of the protogalaxies. This was due to the inflows of the dark energy. In the surroundings of the evaporating protogalaxies stars arose, so there should be groups of the first stars. We should not observe a regular distribution of them.

Fine-structure constant: Its value changed in the protuberances in the Einstein spacetime appearing at the beginning of the 'soft' big bang. The fine-structure constant is in proportion to the mass density of the Einstein spacetime to the power of five thirds. We observe such changes for the quasars.

Four-neutrino symmetry: There are four different neutrinos (two neutrinos and two antineutrinos). A binary system composed of the binary systems of neutrinos, when it consists of four different neutrinos, can have total spin and total internal helicity equal to zero. Entanglements of such objects lead to the cosmic structures but solve many other problems as well.

Fractal: An object composed of solitons having different sizes.

Fractal field: A field composed of threads consisting of binary systems of neutrinos in such a way that the spins are tangent to the thread.

Gluon-photon transitions: The neutrino-antineutrino pairs are the carriers of the elementary gluons and photons. The pairs have the three internal helicities (the three colours) but their internal structure is disclosed in the strong field only, because this field, in contrast to the electromagnetic field, has internal helicity due to the properties of the strong charge/mass.

Gravitational interaction: All particles composed of neutrinos interact gravitationally. The neutrinos transform the chaotic motions of the free tachyons into divergently moving tachyons. This means that the closer to a bare particle, the lower the pressure in the Newtonian spacetime. Such is the origin of gravitational attraction. This gradient is impressed on the Einstein spacetime, which means that the Einstein gravity appears.

Gravitons: The graviton could be the rotational energy (its mass is zero) of a particle composed of the four different neutrinos in such a way that the carrier of the graviton is the binary system of binary systems of neutrinos with parallel spins, i.e. the spin of the carrier of the graviton is 2. We will call such a carrier the neutrino bi-dipole. Due to the internal structure of the rotating neutrino quadrupole, two transverse waves appear. This means that a rotating neutrino bi-dipole behaves as two entangled photons, not as a graviton. Gravitational energy is emitted via the flows in the Einstein spacetime composed of the non-rotating-spin neutrino bi-dipoles. Gravitons and gravitational waves do not exist.

Hadronization and deconfinement in the Everlasting Theory: The confinement described within the Everlasting Theory leads to the gluon balls.
Due to the atom-like structure of baryons, the gluon balls transform into the sham quark-antiquark pairs, i.e. into vortex-antivortex pairs. We can see that between the quarks the gluon balls can be exchanged. The exchanged gluon balls, due to the confinement, produce the spokes, i.e. the physical traces in the Einstein spacetime. The action does not depend on the length of a spoke but it obviously must be proportional to the area of the cross-section of a spoke. The spokes are the regions in the Einstein spacetime in which the mass density of the Einstein spacetime is a little higher than the mean. Such regions produce turbulence in the Einstein spacetime. Nature tries to eliminate the turbulence. How can it do so? Assume that in a room there are many chaotically running cats, so there are many collisions and turbulence arises, i.e. regions in which the number densities of cats differ from the mean density, and trajectories not planned by the cats appear as well. The pressure inside the room depends on the number of cat collisions per unit of time. What should the cats do to reduce the pressure maximally? They should run with the same spin speed in a cat vortex. Nature does the same to eliminate the turbulence. From the spokes vortices arise. To conserve the symmetry, vortex-antivortex pairs arise. But then the physical traces produced in the Einstein spacetime look like tubes. This means that now the action is proportional to the perimeter of the tubes. Let us emphasize that the gluon balls produce confined spokes and the action is proportional to the area of the cross-sections of the spokes, whereas the vortex-antivortex pairs produce tubes and the action is proportional to the perimeters of the tubes. On the other hand, we know that in gauge theory the confining phase (for example, it can be the hadronic phase) is defined by the action of the Wilson loop. It is the trace/path in spacetime. In a non-confining theory the action is proportional to the perimeter of the loop (tubes), whereas in a confining theory the action is proportional to the area of the loop (spokes); a toy numerical comparison of the two laws is sketched below. We can see that the transition of the spokes into the quark-antiquark pairs (the hadronization) causes the quarks confined by the spokes to become the free quark-antiquark pairs (they are the mesons or the entangled baryon-antibaryon pairs). Due to the atom-like structure of baryons, the emitted pairs simulate the known hadrons. We can see that a hadron jet 'observed' by detectors consists of the tube-antitube pairs. A similar confinement can appear in the electromagnetic field but, because the internal helicity of this field is equal to zero, such confinement is colourless. The quark-gluon plasma mostly consists of the cores of baryons, precisely of the core-anticore pairs. The core-anticore pairs are tangent, so there is a very small volume between the pairs in which the quark-antiquark pairs and the spokes can be created. We can say that the not numerous spokes at once transform into the known hadrons. This looks like a deconfinement.

Higgs field, Higgs boson, Higgs mechanism, hierarchy problem, confinement and mass gap(s) in the Everlasting Theory: In the Everlasting Theory the modified Higgs field is the fundamental/Newtonian spacetime composed of the tachyons. Due to the tremendous pressure it behaves as a liquid. We need the spacetime composed of tachyons to explain the inflation, the long-distance entanglement, the cohesion of wave functions and the constancy of the speed of light.
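As an aside to the Wilson-loop remark in the Hadronization entry above: the area-law versus perimeter-law distinction is standard gauge-theory behaviour, and a toy numerical comparison shows how quickly the two fall off with loop size. The string tension sigma and perimeter coefficient mu below are arbitrary illustrative values, not quantities of the theory:

    # Wilson-loop criterion for a rectangular R x T loop: in a confining
    # phase W ~ exp(-sigma*R*T) (area law); in a non-confining phase
    # W ~ exp(-mu*2*(R+T)) (perimeter law). The area law decays much faster.
    import math

    sigma, mu = 0.2, 0.2              # arbitrary illustrative values
    for size in (1, 2, 4, 8):
        R = T = size
        area_law = math.exp(-sigma * R * T)
        perimeter_law = math.exp(-mu * 2 * (R + T))
        print(size, f"area law: {area_law:.3e}", f"perimeter law: {perimeter_law:.3e}")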
The smoothness/symmetry of the modified Higgs field is broken inside and near the Einstein spacetime components, i.e. the binary systems of neutrinos. The inflexible binary systems of the closed strings the binary systems of neutrinos consist of transform the chaotic motions of the tachyons in the modified Higgs field into the divergent jets. This decreases the local pressure in the modified Higgs field inside and near the Einstein spacetime components. When the regions of the negative pressure overlap at least partially, the confinement appears, which breaks the smoothness/symmetry of the Einstein spacetime. It is the broken symmetry between the gravitational force and the weak force. The confinement increases the local gravitational mass density of the Einstein spacetime, so the broken symmetry between gravity and the weak force leads to the mass gap. It is the Higgs mechanism, which describes how particles acquire their mass. In the Everlasting Theory the Higgs bosons are in reality the Einstein spacetime components, which today are classical particles. The Everlasting Theory shows that gravity acts due to the divergent jets produced by the Einstein spacetime components. Because the jets consist of the tachyons, the gravitational constant depends on the density of the modified Higgs field, whereas the fine-structure constant, which characterizes the electromagnetic interactions, depends on the density of the Einstein spacetime. The ratio of the density of the Einstein spacetime to the density of the modified Higgs field is tremendous, i.e. approximately 4·10⁴². This is the reason why the electroweak interactions are much stronger than the gravitational interactions. The Planck critical mass (approximately 2.2·10⁻⁸ kg), which is of the same order of magnitude as the geometric mean of the tremendous energy frozen inside a neutrino (not its mass) and the very small mass of the neutrino (the geometric mean is approximately 8·10⁻⁸ kg), is very great in comparison with the mass of the Higgs bosons in the Everlasting Theory and in the Standard Model as well. This is the hierarchy problem. We can see that within the Everlasting Theory it is very easy to show the origin of the hierarchy problem. Just due to the phase transitions of the modified Higgs field, the energy frozen inside a neutrino is about 0.6·10¹¹⁹ times higher than the mass of the neutrino. This is the reason why the mass of the Higgs bosons in the Everlasting Theory is so small in comparison with the Planck critical mass. This proves that the lowest excitation of a Yang-Mills theory without matter fields has the finite mass gap associated with the Einstein spacetime (the vacuum state). I proved also that the described confinement is valid in the presence of additional fermions such as, for example, the two energetic neutrinos in the ball in the centre of the muon (the ball is responsible for the weak interactions). Such a ball exists due to the confinement of the Einstein spacetime components. Due to the confinement described within the Everlasting Theory, sham Higgs bosons can appear whose masses are much greater than those of the Einstein spacetime components, and the detected Higgs boson carrying a mass of 125 GeV is, in reality, a sham Higgs boson. Due to the Newtonian spacetime, i.e. in approximation the scalar field, vortex-antivortex pairs whose spin is equal to zero can appear, for example the pions. Due to the Einstein spacetime, i.e.
the vector field, vortex-antivortex pairs whose spin is unitary can appear, for example the fermion-antifermion pairs in which the components are entangled. But to eliminate the turbulence, the internal helicity of both types of bosons must be equal to zero. In the Everlasting Theory, the Newtonian spacetime is some analogue of the Higgs field, i.e. the massless scalar field. The Newtonian spacetime consists of the tachyons, which have inertial mass but no gravitational mass, i.e. we can say that the Newtonian spacetime is the gravitationally massless spacetime. The mean spin of the tachyons is approximately 10⁶⁷ times smaller than the reduced Planck constant (i.e. the h divided by 2π). This means that we can, in approximation, assume that the Newtonian spacetime is a scalar spacetime. The Einstein spacetime described within the Everlasting Theory is not invariant with respect to a gauge group. This is because the Einstein spacetime components decrease the pressure in the Newtonian spacetime near the components. The modified Higgs field, i.e. the Newtonian spacetime, introduced into the Yang-Mills theory causes this theory to be renormalized. The mechanism which leads to the mass gaps in the massless gauge fields we can call the confinement. Inside four of the five basic bare fermions, i.e. the neutrinos, the cores of baryons, the bare electrons and the protoworlds, there is a tire-like torus and a ball which represents the axis of a wheel. Moreover, the Einstein spacetime components exchanged between the tire-like torus and the ball-axis inside the core of baryons and the bare electrons produce confinements which look like spokes. The spokes composed of the confined components of the Einstein spacetime appear outside the bare particles as well. In the quark-gluon plasma the cores of baryons are tangent, so confinement cannot appear between the cores. Just due to this 'deconfinement' the plasma behaves as a liquid-like plasma. When the energy of collisions increases, more and more regions appear in which the confinement acts, i.e. the quarks can be screened and the gluons thickened. Interactions via the thickened Einstein spacetime do not depend on the distance between the interacting particles. This is because the components of the Einstein spacetime cannot change their mass. What changes is the local mass density of the Einstein spacetime, not the mass of its components. The separation of particles interacting via the thickened Einstein spacetime causes new particles to arise from the additional mass 'created' due to the confinement. Due to the phase transitions in the Einstein spacetime or the internal structure of the core of baryons, the gluons and quarks re-organize themselves. This is the hadronization. The quarks inside baryons can arise only as quark-antiquark pairs and the spins of the components of a pair must be antiparallel.

Hypernova: The stabilization of the temperature inside a supernova or hypernova is due to the transition of the hot electron-positron pairs into cold charged pion-antipion pairs. The mass of a magnetar is greater than the mass of a neutron black hole (which is approximately 25 times the mass of the sun) and smaller than 50 times the mass of the sun. When the mass of a hypernova is greater than about 100 masses of the sun, a granulation of the hypernova appears, leading to the rotating neutron tetra-black-hole. There the four magnetars, lying on the same plane and rotating around the axis perpendicular to this plane, appear.
The granulation is very energetic because the neutron black holes have a strictly determined mass – the four arising neutron black holes, due to the gravitational collapse, emit tremendous energy and push the redundant mass out from the region between the black holes with very great force. Due to the very high angular momentum of the neutron tetra-black-hole, the redundant mass in its centre moves along the axis of rotation. There the jets arise. The black holes more massive than the smallest hypernova consist of the magnetars. A strictly determined number of magnetars appears in the black holes. The number of the magnetars in a hypernova is determined by the formula D = 4^d, where d = 0, 1, 2, 4, 8, 16, … are the numbers appearing in the Titius-Bode law. The next hypernova greater than the one described above should be 400 times greater than the mass of the sun.

Inflation: It is the expansion, with superluminal speed, of a tachyonic concentration. During the inflation, the binary systems of closed strings and the neutrino-antineutrino pairs the Einstein spacetime consists of appear. The entangled neutrino-antineutrino pairs (their speed is the c) stop the inflation.

Interactions: The fifth force (fundamental) follows from the direct collisions of the tachyons. The known four interactions are associated with the Einstein spacetime. The binary systems of the closed strings a neutrino consists of transform the chaotic motions of the tachyons into divergently moving tachyons. This produces a gravitational gradient in the Newtonian spacetime but also in the Einstein spacetime. The gravitational constant G is associated with each neutrino. The exchanged regions of the thickened Einstein spacetime are responsible for the weak interactions. Such exchanges take place when the surfaces of the regions are at a distance equal to or smaller than 3482.87 times the radius of the equator of the torus of the neutrino. For the strong interactions the exchanges of the large loops (mesons) and of the binary systems of the large loops (baryons) produced on the circular axis of the torus of the core of baryons are responsible. The virtual and real photons produce the electron-positron pairs in the Einstein spacetime. Their annihilations create divergently moving fluxes of binary systems of neutrinos in the Einstein spacetime. Such processes are responsible for the electromagnetic interactions. The unitary spins of the Einstein spacetime components enforce that the carriers of interactions have unitary spin as well. Exchanges of the binary systems of the closed strings lead to the entangled photons and other entangled particles. This is the sixth force.

K² constant: It is the number of tachyons a closed string consists of.

Large loop: It arises inside the torus of baryons and consists of the weak dipoles.

Limitations for gauge invariance: The gauge invariance of the equations applied in the theory of fields is directly associated with the constancy of the charges (weak, electric, strong and superstrong). The assumption that the charges are invariant leads to the conclusion that the following transformation of the vector potential A,

A' = A + grad f (the gradient invariance),

where f is an arbitrary function dependent on the coordinates and time, and the following transformation of the scalar potential φ,

φ' = φ − c⁻¹ ∂f/∂t,

where the sign '−' follows from the definition of the distance ds in spacetime (the metric)

ds² = x² + y² + z² − c²t²,

cause the equations to be invariant under such a gauge. What is the origin of such gauge invariance?
In the Everlasting Theory the charges are defined by the properties of the tori inside the bare fermions. Due to the entanglement of the components the tori consist of, their interactions are saturated, i.e. they cannot interact with fields, but they polarize the spacetimes. Moreover, the properties of the charges depend on the properties of the two spacetimes, for example on their mass densities. This means that the constancy of the charges was not valid in the era of inflation and in the protuberances of the Einstein spacetime just at the beginning of the expansion of the Universe. This expansion took place after the era of inflation. The observational facts indeed show that the fine-structure constant varied in the era of the quasars. The Everlasting Theory shows that when we add to the vector potential associated with a charge a constant vector, for example the spin-polarized Einstein spacetime, and/or to the scalar potential an arbitrary constant, for example the gravitationally massless Newtonian spacetime (today its mass density is constant), then such changes cannot change the charges. But the theory shows that the constancy of the charges is not valid when the densities of the spacetime(s) change. For example, there was a phase transition of the cosmic superstrong charge just before the start of the expansion of our Universe. This means that we cannot apply the gauge invariance to such a period, the same as to the inflation. We can assume that the Universe is inside a blow-hole inside timeless space. Then, today, the properties of the spacetimes cannot change. This leads to the constancy of the charges. Due to the saturation of the interactions concerning the charges, today there is some freedom in the quantum theory of fields to define the vector and scalar potentials, but such freedom follows from the fact that we neglect the internal structure of the bare fermions. To eliminate the freedom, we must add the Everlasting Theory to today's mainstream theories. Nature chose only a few of the solutions.

Lines of forces: The spins of the binary systems of neutrinos (the weak dipoles) overlap with the electric lines. The magnetic lines are associated with spinning electric loops.

Liquid-like plasma: The Everlasting Theory leads to an atom-like structure of baryons, therefore also of the nucleons. The internal structure of neutrinos and the new theory of their interactions show that it is very difficult to destroy the cores of baryons – they are tori with a mass in their centres and consist of the Einstein spacetime components, i.e. of the binary systems of neutrinos. Inside our Universe, the density of energy and mass is too low to compress the cores of baryons. The liquid-like plasma consists of the cores of baryons packed to the maximum.

Local time: Inside the gas composed of tachyons, I define local time as being directly proportional to the number of all direct collisions of the free tachyons in some local volume of the Newtonian spacetime. This analogy and definition is also relevant for the Einstein spacetime, which is composed of the weak dipoles.

Local unit of length: The local unit of length is the local mean distance between the free tachyons the Newtonian spacetime consists of. This is also the case for the Einstein spacetime.

Local unit of time: The local unit of time is the mean time between the direct collisions of the free tachyons the local volume of the Newtonian spacetime consists of. This is also the case for the Einstein spacetime.

Magnetar: This is a neutron black hole.
Its mass is greater than 25 times the mass of the sun and smaller than 50 times the mass of the sun.

Magnetic monopoles and magnetic force: Magnetic monopoles do not exist. The entangled weak dipoles an electron consists of polarize the Einstein spacetime in such a way that along the polarized lines the polarized virtual electron-positron pairs, i.e. the virtual electric dipoles, arise. The electric lines of forces are tangent to the spins of the polarized virtual electron-positron pairs, and so to the spins of the weak dipoles as well. The whole structure is entangled. The torus of an electron is the locally polarized Einstein spacetime. It is spinning. This means that, due to the entanglement, the virtual electric dipoles notice that the electric charge of the electron is spinning. Due to the entanglement, a force appears that tries to spin the polarized electric lines as well, i.e. forces perpendicular to the electric lines of the electron appear. We can call the force produced by the spinning electric charge the magnetic force. Due to the entanglement, the magnetic force is directly proportional to the distance of an electric charge from the spinning electric charge, so this force is directly proportional to the local spin speed of the spinning electric lines. The magnetic force is associated with the spinning electric charge, so the magnetic intensity is an axial vector whereas the electric field intensity is a polar vector. The magnetic force appears only when the electric charge is moving, because only then is the spin vector polarized along the direction of the velocity of the electron. This is due to the law of conservation of spin. When an electron is at rest or is moving very slowly, the direction of its spin changes randomly, and so does the direction of the magnetic force. This means that the resultant magnetic force is zero. When an electron is moving, its spin is parallel or antiparallel to the velocity of the electron, whereas the velocities of the virtual pairs associated with the motion of the electron as a whole are parallel to the velocity of the electron. When the magnetic force is not perpendicular to the velocity of the electron, the magnetic force is directly proportional to the vector product of the velocity of the electron and the magnetic intensity. The spins of the electron and positron in a virtual electron-positron pair are parallel, so both magnetic forces are parallel. We can see that the magnetic force can act on an electric charge in massless electromagnetic fields as well. It would be more readable after changing the term 'magnetic forces' into 'spin forces'. The magnetic field intensity is an axial vector, so there cannot exist an object producing divergent or convergent lines of magnetic forces. The weak charge of neutrinos behaves similarly to a magnetic monopole, i.e. to a magnetic charge. The neutrinos arose due to the two succeeding transitions of the modified Higgs scalar field, i.e. the Newtonian/fundamental spacetime. The neutrinos arose in the era of inflation, so in the theory of inflation we should eliminate the magnetic charges and introduce the neutrinos. The weak charges/neutrinos broke the symmetry between gravity and the weak interactions.

Mass: The inertial mass is directly proportional to the total volume of the tachyons a body consists of, or to the number of the closed strings a body consists of. Inertial mass is a more fundamental physical quantity than energy, i.e.
pure energy does not exist without a spacetime/field having an inertial mass density different from zero. The gravitational mass is associated with the neutrinos, and so also with the binary systems of neutrinos the Einstein spacetime consists of. For the Einstein spacetime, the gravitational mass is equal to the inertial mass.

Mesons: They are multi-systems of large loops which are created inside the torus of baryons. They can also be mesonic nuclei composed of the other mesons and the large loops, or they can be binary systems of mesonic nuclei and/or other binary systems. They can also be the gluon balls or other loops and their associations.

Mind: A thought is composed of closed threads built out of the binary systems of neutrinos. The axes of these weak dipoles are tangent to the threads. Such closed threads can produce 'lines' composed of polarized virtual electron-positron pairs, so minds may interact with a brain or matter electromagnetically.

M-theory: The M-theory contains the fundamental bosonic string theory plus the three superstring theories, for which the fermion-boson symmetry is obligatory, plus the two heterotic theories, which follow from the internal structure of the Einstein spacetime and the structure of baryons.

Muon: Due to the entanglement of the binary systems of neutrinos, the torus of the muon looks like a shrunken torus of the electron. We can say that the torus of the muon is a zero-energy entangled photon, but it has mass because the distances between the binary systems of neutrinos are shorter than in the Einstein spacetime. Such a shrinkage is forced by the two additional rotating neutrinos inside the point mass of the electron. These two additional neutrinos cause the point mass of the muon to be a black hole with respect to the weak interactions. The muon decays due to the weak interactions – there is the emission of the two additional neutrinos.

Neutrino 'oscillations': The exchanges of the free neutrinos for the neutrinos in the binary systems of neutrinos the Einstein spacetime consists of lead to an illusion that the neutrinos oscillate. Neutrinos cannot oscillate due to their tremendous binding energy (not mass) – it is equivalent to approximately 4·10⁵⁰ kg.

Neutrinos and lacking dark energy: Neutrinos appear as a miniature of the core of a proton. Neutrinos are composed of closed strings. The external radius of the torus of a neutrino is approximately 1.1·10⁻³⁵ m. There are the entangled binary systems of neutrinos (the mass of one binary system is approximately 6.7·10⁻⁶⁷ kg) which, in today's Universe, behave as classical particles. There are the photon galaxies and the wave functions describe their behaviour. The c is the natural speed of the entangled photons and gluons (today they are quantum particles) in the gravitational gradients produced in the Newtonian spacetime. Almost all neutrinos are in the binary systems. The spins of almost all binary systems of neutrinos do not rotate, because bound tachyons tend to behave in a similar way to free tachyons. The Planck time is typical for the lifetime of the local Einstein spacetime in an excited state, i.e. in a state in which the spins of the binary systems of neutrinos rotate. It is very difficult to detect the non-rotating-spin binary systems of neutrinos because they cannot transfer energy to a detector. Neutrinos are very stable particles – we do not see the by-products of neutrino-antineutrino annihilations. (The energy scales quoted in this and the related entries are cross-checked in the sketch below.)
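A consistency sketch across entries: the internal-energy factor quoted in the next paragraph, and the geometric mean quoted in the hierarchy-problem entry, follow from the neutrino mass (half of the binary-system mass quoted above) and the Protoworld mass quoted in the Protoworlds entry. This is only a cross-check of the figures as given:

    # Cross-check of the frozen-energy factor and the geometric mean.
    import math

    m_binary = 6.7e-67            # kg, binary system of neutrinos (quoted above)
    m_neutrino = m_binary / 2     # kg, single neutrino
    m_protoworld = 1.9e52         # kg, mass-equivalent of the frozen energy
    print(f"frozen-energy factor: {m_protoworld / m_neutrino:.1e}")          # ~0.6e119
    print(f"geometric mean: {math.sqrt(m_protoworld * m_neutrino):.1e} kg")  # ~8e-8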
My theory leads to the conclusion that the internal energy of a neutrino is approximately 0.6·10¹¹⁹ times greater than the energy of a neutrino resulting from the formula E = mc². This is because neutrinos are built of closed strings moving at a superluminal speed (approximately 2.4·10⁵⁹ times greater than the speed of light in spacetime). The tremendous amount of energy frozen inside neutrinos excludes the creation of neutrino-antineutrino pairs in a manner similar to, for example, the electron-positron pairs. The new neutrinos are by-products of the decay of the rotating-spin or non-rotating-spin binary systems of neutrinos. The frozen energy inside neutrinos is the lacking dark energy. A field composed of free binary systems of closed strings does not exist; therefore, the transformation of their rotational energy into mass is impossible. The exchanges of the binary systems of the closed strings between the binary systems of neutrinos produce the entangled photons and other particles. Such phenomena led to the visible distribution of the galaxies. There are only four different states of neutrinos – the taon neutrino does not exist. The divergently moving tachyons (order) produced by the closed strings a neutrino consists of create a gradient of pressure in the Newtonian spacetime (chaos) in such a way that the pressure is lower in places where the mean density of the divergent jets is higher. This means that a 'niche' is created in the Newtonian spacetime (i.e. the mean distance between the free tachyons is greater), so time goes more slowly there. This is the mechanism responsible for how a neutrino acquires its own gravitational field by interacting with the Newtonian spacetime. The attractive gravitational force and the gravitational potential energy are associated with the gradient of the negative pressure in the Newtonian spacetime. To describe a neutrino built up of the closed strings we need 26 mathematical and physical quantities.

Neutron black hole: Its mass is about 25 times the mass of the sun.

Newtonian spacetime: An ideal gas composed of tachyons. Only very near the surfaces of the closed strings is the Newtonian spacetime highly deformed. Outside the closed strings, because of the superluminal speed of the tachyons, i.e. because of the tremendous pressure found in the Newtonian spacetime, this spacetime behaves like a liquid-like substance. For interactions lasting longer than about 10⁻⁶⁰ s, the Newtonian spacetime appears as a continuous medium.

Non-perturbative theories: There exist the stable tori, the stable core of baryons and the stable states associated with the atom-like structure of baryons. Even the unstable particles are, for the period of spinning, stable objects. To describe such objects we can apply the non-perturbative methods. For electrons, the non-perturbative and perturbative stadiums are separated, whereas for baryons they exist simultaneously. The non-perturbative theories are obligatory for all energies. The stable states we can describe via simple formulae in which time does not appear. In the mainstream theories there is a tremendous number of unsolved basic problems associated with the stable structures. The non-perturbative Everlasting Theory is the lacking part of the ultimate theory and is the foundation of the correct mainstream theories.

Perturbative theories: These theories concern the phenomena associated with the disappearances of the circular and point/ball masses of the electrons and sham quarks. They lead to the diagrams.
The number of the disappearances increases when the energy increases. This means that the perturbative theories should lead to wrong results for low energies.

Phase space: The set of numbers and quantities needed to describe the position, shape and motions (internal motions also) of an object. For example, the phase space of a tachyon has 6 elements, that of a closed string 10, whereas that of a neutrino 26.

Phase transitions: The theory of liquids leads from the tachyons packed to the maximum to the closed strings, whereas the saturation of the interactions of the tachyons due to the fundamental force leads from the closed strings to the neutrinos, the cores of baryons and the protoworlds.

Photon galaxies: They arose due to the succeeding decays of the superphotons that were produced in the cosmic loop. Each carrier of the photon galaxies is composed of 4¹⁶ entangled neutrino-antineutrino pairs. The arrangement of the components of a carrier of a photon galaxy changes over time but, for defined arrangements, the photon galaxies are stable objects. The speed c is the speed of the wave functions describing the photon galaxies but also the speed of a photon galaxy in its defined arrangement. Due to the entanglement, we cannot measure the speeds and energies of the components of a photon galaxy. Due to the entanglement of the components of a carrier of a photon galaxy, the total energy and the speed c of a photon galaxy are disclosed in the detectors when at least one component of the carrier of the photon galaxy interacts with a detector. The localization of a photon galaxy changes over time, i.e. it disappears in some region of the Einstein spacetime and appears in another one, and so on. Such quantum behaviour of a photon galaxy is described by its wave function. It looks similar to that of an electron, but in an electron, apart from the entanglement, the short-distance weak interactions appear, i.e. the regions in the Einstein spacetime which have higher mass densities than the mean mass density. The weak interactions appear when the additional mass density is the same as or higher than in the point masses of the electron and proton, i.e. approximately 2.731·10²³ kg/m³, i.e. about 40,363 times lower than the mean mass density of the Einstein spacetime (see the description of the fundamental force in the chapter 'Interactions'). For such a density, the mean distances between the neutrino-antineutrino pairs are (40,363/(40,363 + 1))^(1/3) = 0.9999917 times the mean distances in the Einstein spacetime (these figures are checked numerically below). The weak interactions, contrary to the entanglement, cause the relativistic particles and the mass gap in the Yang-Mills theory to appear.

Photons: Quanta of energy carried by the entangled binary systems of neutrinos. The mass of photons (i.e. of the rotational energy, i.e. of the excitations of the Einstein spacetime) is equal to zero. The Everlasting Theory shows that the Einstein formula E = mc² is wrongly interpreted. The transition from pure energy (the mass is zero) into mass is impossible without the Einstein spacetime having a mass density different from zero. Inertial mass is a more fundamental physical quantity than energy. To know how particles acquire their relativistic mass we must know the internal structure of the Einstein spacetime. The cited Einstein formula is correct due to the laws of conservation of spin and energy. The wave functions describe the behaviour of the entangled neutrino-antineutrino pairs. The c is the speed of the entangled photons and gluons (today they are quantum particles) in the gradients produced in the Newtonian spacetime.
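A quick numerical check of the density figures in the Photon galaxies entry above (a sketch; the Einstein-spacetime density is inferred here from the quoted ratio, not stated in this entry):

    # Check of the weak-interaction density figures quoted above.
    point_mass_density = 2.731e23     # kg/m^3, point masses of electron/proton
    ratio = 40363                     # quoted ratio to the Einstein spacetime density
    einstein_density = point_mass_density * ratio
    print(f"Einstein spacetime density: {einstein_density:.2e} kg/m^3")  # ~1.1e28
    shrink = (ratio / (ratio + 1)) ** (1 / 3)
    print(f"distance factor: {shrink:.7f}")                              # 0.9999917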
We can see that the invariance of the c leads to the quantum physics. In today's Universe a single neutrino is a classical object, so its speed can be superluminal as well, but most of them move with the speed c because they appear mostly due to the decays of the carriers of the photons.

Pieces of space: They are the internally structureless tachyons. In different regions of the cosmos (on a cosmic scale) the speeds of tachyons (and so also their sizes) can differ. There can be regions in which the pieces of space move with subluminal speeds or are at rest.

Pion: It is the binary system of the large loops produced on the circular axis (it is the electric charge, i.e. the circle on which the lines of electric forces converge) inside the torus in the core of a baryon.

Planck critical physical quantities: The critical values are defined for a cube, whereas for the neutrinos there is a torus, so the calculated values are not exactly consistent with the Planck critical values but should be close to them. Moreover, the reduced Planck constant is for a binary system of neutrinos, not for a single neutrino. The volume of the Einstein spacetime component, i.e. the binary system of neutrinos, is V = 2π(π + 1)r³(neutrino)/3 = 12.138·10⁻¹⁰⁵ m³. Such a volume for a cube leads to a side equal to 2.298·10⁻³⁵ m (the Planck length is 1.616·10⁻³⁵ m). The energy frozen inside a neutrino is equal to the mass of the protoworld. The geometric mean of this energy and the mass of the neutrino is 8.087·10⁻⁸ kg. The mass of a binary system is two times greater, 16.174·10⁻⁸ kg, whereas the Planck mass is 2.177·10⁻⁸ kg. But most important is the mass which defines the lowest temperature at which the liquid composed of the Einstein spacetime components appears. This mass is some analogue of the mass 282.93 MeV for the liquid-like plasma composed of the cores of baryons. Such a mass for the neutrinos is approximately 282.93/727.44 = 0.38894 times the mass of the neutrinos. For such a mass, the critical mass density for the binary systems of neutrinos is 5.18·10⁹⁶ kg/m³, whereas the Planck critical mass density is 5.16·10⁹⁶ kg/m³. (Some of these figures are checked numerically below.)

Proton: The core of the proton is composed of binary systems of neutrinos. It has a point mass and a circular mass. Due to the emission and absorption of virtual particles and their subsequent decay, tunnels appear in the Einstein spacetime, i.e. holes arise in the field composed of binary systems of neutrinos. This leads to the Titius-Bode law for the strong interactions. Within the tunnels there can be relativistic pions that are in the S state. In the proton there is only one relativistic pion and it is under the Schwarzschild surface for the strong interactions, so the proton is a stable particle. Meanwhile, baryons possess an atom-like structure.

Protoworlds: Protoworlds consist of nucleons and arise after the period of inflation. Their radius is approximately 2.7·10²⁴ m. The torus of a protoworld consists of deuterium. In the centre of the torus is the mass composed of the neutron black holes. Immediately before the 'soft' big bang the Protoworld-to-neutrino transition was possible only because the object had a strictly determined mass (approximately 1.9·10⁵² kg). The dark energy is the remnant of such a transition and is composed of the additional non-rotating-spin neutrino-antineutrino pairs, i.e. inside our Universe the density of the field composed of binary systems of neutrinos is higher than outside it. This is the positive pressure reducing the negative pressure in the spacetime created by the mass of our Universe.
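A numerical check of part of the Planck-quantities entry (a sketch; the neutrino equator radius r ≈ 1.1184·10⁻³⁵ m used here is the more precise value implied by the quoted volume, while the text rounds it to 1.1·10⁻³⁵ m, and the masses are the rounded values quoted in the related entries):

    # Check of the volume, cube side and geometric mean quoted above.
    import math

    r = 1.1184e-35                               # m, implied neutrino equator radius
    V = 2 * math.pi * (math.pi + 1) * r**3 / 3   # volume of a binary system
    print(f"V = {V:.3e} m^3")                    # ~1.214e-104 = 12.14e-105
    print(f"cube side = {V ** (1/3):.3e} m")     # ~2.298e-35 (Planck length 1.616e-35)
    m_neutrino, m_protoworld = 3.35e-67, 1.9e52  # kg, rounded quoted values
    print(f"geometric mean = {math.sqrt(m_neutrino * m_protoworld):.2e} kg")  # ~8e-8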
The Universe arose in a similar way to the large loops composed of binary systems of neutrinos that arise inside the torus of the core of baryons. Such large loops are responsible for the strong interactions. When the dark energy appeared, the very young Universe (whose mass was approximately 1.8·10⁵¹ kg), which was the cosmic loop composed of neutron black holes grouped in larger structures, started to expand. This was due to the repulsive force produced by the dark energy and to the energy emitted during the production of the first atomic nuclei. The photon galaxies that couple the cosmic structures lead to the illusory part of the dark matter. Dark matter also consists of the remnants of the big stars. They are composed of iron-nickel lumps. Detecting these lumps is extremely difficult because their temperature is equal to that of the cosmic microwave background radiation. The interior of a sphere filled with baryonic matter contains approximately 5% visible matter, 21% dark matter and 74% dark energy. Protoworlds developed as protoworld-antiprotoworld pairs from positive fluctuations of the field composed of non-rotating-spin binary systems of neutrinos.

Pulsars: Similarly as for the Sun, the magnetic axis of pulsars associated with the spots rotates. There are two pulses per period. We obtain the correct results when we assume that the pulsar/star period of rotation of the magnetic axis associated with the spots, T, is directly proportional to the surface of these cosmic objects (surface = 4πr²), with the factor of proportionality f = 1.15·10⁻¹⁰ s/m², i.e. T = f·4πr². For the Sun (r = 6.96·10⁸ m) we obtain T = 7·10⁸ s = 22.2 years. For the surface of the biggest neutron star (r = 3.7·10⁴ m) we obtain T = 2 s, i.e. the mean time distance between pulses should be approximately 1 s (these two cases are evaluated numerically below). Over time, due to the surface processes, the pulsars increase their radii, so the period T increases as well. For smaller pulsars the periods T are shorter. On the surface of each pulsar an Fe crust and a very thin layer of plasma composed of protons, ions and electrons arise. To decrease the pressure on the surface of pulsars and stars, charged vortices composed of protons and ions appear. Their magnetic axes are perpendicular to the surfaces of the pulsars and stars in such a way that the magnetic polarization of the opposite vortices is the same. Similarly as in a photon, the resultant magnetic polarization should be perpendicular to the velocity of the pulsar in relation to the Einstein spacetime. Due to the rotation of the mass, a circular positive current arises in the plasma, overlapping with the equator of the pulsar or star. Due to the vortices on the surface, the Lorentz force acts on the circular current, so the axis of the circular current rotates. Half of the period of such a rotation is the mean time distance between the pulses emitted by the pulsar. Due to the very thin and wide circular currents, the Fe crust is polarized along the meridians associated with the axis of rotation of the mass. The electric lines of forces are tangent to the parallels associated with the rotating electric current. We can see that, due to the magnetic polarization of the Fe crust and the rotation of the magnetic axis of the circular current, the Lorentz force acts on the plasma, so radial oscillations of the protons, ions and electrons appear. Due to the interactions of the protons and ions with the crust and the neutrons, such oscillations produce the linearly polarized frequencies. Such is the main pulsar clock and radiation mechanism.
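A minimal numerical check of the period formula T = f·4πr² for the two cases quoted above:

    # Pulsar/star rotation period of the magnetic axis: T = f * 4*pi*r^2.
    import math

    f = 1.15e-10                                  # s/m^2, quoted factor
    for name, r in (("Sun", 6.96e8), ("biggest neutron star", 3.7e4)):
        T = f * 4 * math.pi * r**2
        print(f"{name}: T = {T:.2e} s")           # ~7.0e8 s (22.2 yr) and ~2.0 s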
An observer sees the pulses when the direction of observation lies on the rotating plane on which the circular current lies. Now I describe the phenomena which lead to the γ-ray frequencies on the basis of the integrated pulse profile of the Crab pulsar. When the oscillating protons collide with the crust of a pulsar and with the neutrons, helium is produced. We can distinguish three stages. At the beginning, the nucleons are at the following mean distance d: d = (4πA/3 + 4πA/(3cos α))/2 = 2.939825 fm, where tan α = 1/(2π). During this stage the emitted energy is proportional to the mass/energy of the large loop, 67.5444 MeV. Next, the distance is r = A + 4B = 2.704800 fm. During this stage the emitted energy is proportional to the mass/energy of the S pion on the d = 4 Titius-Bode orbit for the strong interactions, 186.886 MeV. We can see that the change in distance is x = d – r = 0.2350245 fm. In the third stage there is a transition to the alpha particle and the side of the square is y = 1.912583 fm. During this stage the emitted energy is proportional to the mass/energy of the neutral pion, 135 MeV. In the integrated pulse profile of the Crab pulsar we should therefore see three peaks and, because the time distances between the subpulses in the average pulse shape are directly proportional to the ranges x and y, the ratio of the time distance between the third and second subpulses to the time distance between the second and first should be y/x = 8.14. The observational data lead to 13.37 ms/1.64 ms = 8.15. We can see that the theoretical result is consistent with the observational facts. We can also see that the ratio of the amplitudes of the energy fluxes for the three peaks should be 67.5444 : 186.886 : 135 ≈ 1 : 2.8 : 2, i.e. the amplitude of the first subpulse should be the lowest and that of the second the highest. The obtained results for the amplitudes are consistent with the observational facts as well (these ratios are recomputed in the sketch below). Partially, the energy emitted at the γ-ray frequencies interacts with the oscillating electric charges in the plasma, so the radio, optical and X-ray frequencies appear as well. The exact pulse profile at the γ-ray frequencies we can observe at the radio frequencies associated with the oscillating free electrons. This is because the inertia of the free electrons is much lower than that of the ions (the ions produce the optical frequencies) and of the electron-positron pairs interacting with the ions (the X-ray frequencies arise due to the annihilations of the pairs). Now, on the basis of the Everlasting Theory, we can calculate the effective temperature. Due to the four-neutrino symmetry, the pions can be composed of 2·4¹⁶ neutrinos, so regions containing 2·4¹⁶ entangled nucleons arise. From the Wien law it follows that the temperature of the large loop (its circumference is 4πA/3) is T ≈ 10¹² K. We can see that the temperature of the regions containing 2·4¹⁶ entangled nucleons can be 2·4¹⁶·T ≈ 10²³ K. The obtained theoretical result is consistent with the observational data. Besides the helium production there are the synchronized beta decays and the He-to-Fe transitions. Such pulses are much rarer but their energy should be higher than the average. With time, due to the surface processes (i.e. the He-to-Fe transitions), the period of rotation of the magnetic axis associated with the spots increases because the volume of the star increases.

QCD and the Everlasting Theory: There are eight 3-coloured gluons and six 1-coloured basic sham quarks. The binary systems of neutrinos are the carriers of the massless gluons and photons.
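A sketch recomputing the Crab-pulsar ratios from the Pulsars entry above (A is recovered here from the quoted value of d, and B from r = A + 4B; only the figures quoted in that entry are used):

    # Recompute the Crab-pulsar figures quoted in the Pulsars entry.
    import math

    alpha = math.atan(1 / (2 * math.pi))          # tan(alpha) = 1/(2*pi)
    d, r_ = 2.939825, 2.704800                    # fm, quoted distances
    A = 2 * d / (4 * math.pi / 3 * (1 + 1 / math.cos(alpha)))   # ~0.6975 fm
    B = (r_ - A) / 4                                            # ~0.5018 fm
    x, y = d - r_, 1.912583                       # fm, quoted ranges
    print(f"A = {A:.4f} fm, B = {B:.4f} fm")
    print(f"y/x = {y / x:.2f}")                   # 8.14
    print(f"observed: {13.37 / 1.64:.2f}")        # 8.15
    amps = [67.5444, 186.886, 135.0]              # MeV
    print("amplitude ratios:", [round(a / amps[0], 1) for a in amps])  # [1.0, 2.8, 2.0]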
In the strong fields, due to the internal helicity of the core of baryons, we must take into account the three internal helicities of the binary systems of the neutrinos – this leads to the eight gluons. Since outside the strong fields the internal helicity of the fields is equal to zero, the internal structure of the carriers of gluons and photons is not important there. The gluons 'transform' into photons. The quarks exist only in the fields composed of gluons.

Quantum gravity: The neutrinos are the 'carriers' of the gravitational constant. There are only 4 different neutrinos (the electron neutrino and its antineutrino and the muon neutrino and its antineutrino). The graviton could be the rotational energy (its mass is zero) of a particle composed of the four different neutrinos in such a way that the carrier of the graviton could be the binary system of binary systems of neutrinos with parallel spins, i.e. the spin of the graviton is 2. The neutrino bi-dipoles behave as two entangled photons. This means that gravitons and gravitational waves do not exist. Gravitational energy is emitted via the flows in the Einstein spacetime composed of the non-rotating-spin neutrino bi-dipoles. The neutrinos, the binary systems of neutrinos, the bi-dipoles of neutrinos, and so on, produce the gradients in the Newtonian spacetime, which are impressed on the Einstein spacetime too. We can describe gravity via such gradients. When the time of an interaction is longer than about 10⁻⁶⁰ s, the particles interacting gravitationally, electromagnetically, weakly and strongly 'see' the Newtonian spacetime as a continuum and we can apply the Einstein equations and the Noether theorem. Such a continuum leads to the symmetries and the laws of conservation. Since the spin of the neutrino bi-dipoles is 2 whereas that of the neutrinos is 1/2, gravity leads to the conclusion that the neutrinos have only two flavours, i.e. only four different neutrinos exist. The tau neutrinos do not exist. The Kasner solution for the flat anisotropic model (1921) in the General Theory of Relativity is the foundation of the Quantum Gravity and Quantum Physics without singularities and infinities. The Quantum Gravity was valid only in the era of inflation. In this era, the neutrino-antineutrino pairs behaved similarly to the electron-positron pairs.

Quantum particles: See 'Renewable particles'.

Quantum Theory of Fields limitations: Perturbative theories can be complete theories when each order of perturbation describes a different interaction/phenomenon. Each perturbative theory which in its next order describes the same elementary phenomena, only more complex, we can always replace with a non-perturbative theory. This is because in such perturbative theories we neglect some interactions/phenomena which follow from the internal structure of the bare particles, for example of the bare electron or of the core of baryons described within the Everlasting Theory. The applied functionals cannot fully describe all the possible interactions of the bare particles with spacetime and fields. This causes the free parameters, the minimal subtraction, the sliding scale, the renormalization, the limitations and so on to appear. The internal structure of the bare particles cannot be described within the methods applied in the mainstream Quantum Theory of Fields.
The Everlasting Theory shows that in the QED we neglect the weak interactions of the bare electron and its internal structure – there is the torus/electric-charge and the ball in its centre responsible for the weak interactions. To detect the torus of the bare electron we must apply new methods because the torus is only the polarized Einstein spacetime. Describing the asymptotic freedom within a perturbative theory, we neglect the coupling of the core of baryons with the Einstein spacetime. It is very difficult to describe mathematically the distribution of matter inside the bare fermions applying the mathematical methods typical for the non-Abelian gauge theories. We simply cannot add the structure of the bare fermions to the Lagrangian. The core of baryons is a stable structure, so it is very simple to describe it within a classical non-perturbative theory. At the lowest levels of nature, physics behaves once again classically. The applied methods are even simpler than in Newtonian mechanics. This means that the methods applied in the mainstream quantum theory of fields are useless for eliminating the parameters applied in such a theory. There are two or three parameters which do not appear in the Everlasting Theory. In the perturbative QED and QCD there are two assumptions which make it possible to fit the theoretical results to the experimental ones: 1. We plan how a function should look, respectively the field normalization Z in the QED and the beta function in the QCD. 2. We introduce some absolute parameters, respectively the mass and charge of the electron, and the absolute value alpha_strong = 0.1182 ± 0.0027 for the mass of the Z boson (2004). But we cannot say that the mainstream perturbative theories are useless. From them we can decipher many properties of the introduced fields, describe some symmetries and so on.

Real photons: In contrast to the virtual photons, they have mass equal to zero. They are the excitations (rotational energies) of the Einstein spacetime. For massless particles, the coupling constants are equal to zero because such particles cannot create gradients in the spacetimes and other fields (they 'slide' along a field). Real photons can carry the electromagnetic interactions only when, scattered on electric charges, they produce the virtual and/or real electron-positron pairs. In annihilations of such pairs, virtual and/or real photons arise.

Renewable particles: The quantum particles disappearing in one place of the Einstein spacetime or strong fields and appearing in another, and so on. They are the real and virtual electrons and photons in the Einstein spacetime, the real or virtual bosons in the strong field inside the baryons, and so on. Their states are described by wave functions.

Running coupling of strong interactions: When we accelerate a baryon, then to conserve its spin, the mass of the large loops responsible for the strong interactions must decrease, so the value of the strong coupling constant decreases also. There appears an asymptote at the approximate value 0.1139.

Small loops: They are small loops composed of the binary systems of the closed strings and produced on the surface of the torus of neutrinos. Their circumferences are 2πr and 2πr/3, where r denotes the radius of the equator of the torus of neutrinos.

'Soft' big bang suited to life (the 'soft' big bang): The big bang of the cosmic loop suited to life that arose inside the Protoworld. In such a cosmic loop the precursors of the DNAs were produced.
Soliton: The tangle of closed threads composed of weak dipoles and produced by a tangle of circular electric currents.

Speeds: Due to the properties of the closed strings and the tremendous speed of tachyons, the gradients/gravitational-fields produced by the divergently moving tachyons are 'attached' to masses. The speed of light c is the natural speed of the carriers of the photon galaxies and gluon galaxies, i.e. of the entangled neutrino-antineutrino pairs, in the locally dominating gravitational field. This is because the entangled photons and gluons are quantum particles, i.e. their states are defined by wave functions. The redshift can be due to the changing potential of gravitation or due to the transitions of photons from one dominating gravitational field to another when the distance between the centres of the gravitational fields is changing. For example, such divergent fields appeared in the Einstein spacetime (the protuberances) at the beginning of the 'soft' big bang. The second phenomenon is beyond the mainstream Theory of Relativity and is responsible, for example, for the redshift higher than 1 for the distant cosmic objects. There are also motions of static gravitational gradients in the static Newtonian spacetime and the almost static Einstein spacetime (the dark energy makes the second spacetime non-static). Due to the properties of the gas composed of tachyons, the protuberances in the Einstein spacetime with speeds higher than the c in relation to the centre of the 'soft' big bang were quickly damped. This new interpretation eliminates the wrong conclusion that the Universe accelerates its expansion without any reason, and leads to the conclusion that our Universe is older. When the velocity of a cosmic object is the same as that of the local dark energy, the mass of the cosmic object is equal to its rest mass. Today, in our Universe, the neutrinos are classical particles, so, similarly to the tachyons, they can be superluminal particles too. Objects greater than a neutrino consist of the binary systems of neutrinos. This means that to travel with superluminal speeds we must create protuberances in the Einstein spacetime. To do this we need tremendous energies.

Spin: Half-integral spin is a more fundamental physical quantity than even the gravitational constant associated with the internal structure of neutrinos. This is true because neutrinos consist of the closed strings that have the half-integral spin.

Spinor: A spinor is the generalization of vector and tensor. Most important is the spinor space associated with the Lorentz transformation because it describes the fermions that have half-integral spin, for example neutrinos and electrons. Since the Einstein spacetime consists of the binary systems of neutrinos, there must be the 720-degree turns of neutrinos to obtain the spin of the Einstein field components (i.e. spin 1). From this it follows that a spinor changes sign under 360-degree turns.

Strong charge: This is the torus inside the core of baryons. Its mass is X = 318.3 MeV. Inside the strong field it behaves as the strong charge/mass carrying the same electric charge as the positron, whereas outside the strong field, due to the gluon-photon transitions, it behaves as the electric charge of the positron.

Strong interaction: This interaction takes place because of the gradient created in the Einstein spacetime by divergently moving large loops or groups of large loops arising inside the torus of the core of a baryon.
The tunnels in the Einstein spacetime, also responsible for the strong interactions, arise as a result of the symmetrical decays of the groups composed of the four virtual remainders.

Supernovae producing neutron stars: In the central part of the core of a sufficiently big star there is liquid-like plasma producing quanta with energy equal to approximately 283 MeV. This energy corresponds to the lower limit of the temperature of the liquid-like plasma, i.e. approximately 4·10^12 K. A stabilization of temperature inside the core of such a star is due to the transitions of the thermal energy into cold charged pion-antipion pairs (their mass/energy is approximately 280 MeV). Since the mass of the neutron (939.6 MeV) leads to a mass of the neutron black hole equal to approximately 25 times the mass of the sun, the 283 MeV leads to a lower limit of mass for a neutron star of approximately 25·283/939.6 = 7.5 times the mass of the sun. The mass of neutron stars is greater than 7.5 times and smaller than 25 times the mass of the sun. Due to weak interactions, the carriers of photons (i.e. the entangled binary systems of neutrinos the Einstein spacetime consists of) appearing in the decay of pions in the liquid-like plasma decay to neutrinos. Since the emitted energy is directly proportional to the coupling constants, for one part of energy carried by photons (coupling constant approximately 1/137) there are 137 parts of energy carried by neutrinos (the coupling constant for strong interactions of pions is 1). This leads to the conclusion that 100%·137/(1+137) = 99.3% of the energy released in an explosion of a supernova is carried by neutrinos, whereas 0.7% is carried by the photons.

Supernova Ia: A stabilization of temperature in the core of such a star is due to the transition of the thermal energy into the point mass of muons (the point mass is approximately 105.67/2 = 52.83 MeV). Since the mass of the neutron (939.6 MeV) leads to a mass of the neutron black hole equal to approximately 25 times the mass of the sun, the mass 52.83 MeV leads to a mass of a type Ia supernova of approximately 25·52.83/939.6 = 1.4 times the mass of the sun.
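The quoted mass limits and the photon/neutrino energy split are simple ratios; a quick Python check of the arithmetic as stated in these two entries:

```python
# Check the supernova arithmetic quoted above.
m_bh = 25.0    # neutron-black-hole mass, in solar masses (as stated)
m_n = 939.6    # neutron mass, MeV

print(round(m_bh * 283.0 / m_n, 1))   # 7.5 -> lower mass limit, neutron-star supernovae
print(round(m_bh * 52.83 / m_n, 1))   # 1.4 -> type Ia supernova mass
print(round(105.67 / 2, 2))           # 52.83 -> muon point mass, MeV

# Energy split between neutrinos and photons for coupling constants 1 and 1/137:
print(round(100 * 137 / (1 + 137), 1))  # 99.3 (% carried by neutrinos)
```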
Superphoton: A superphoton is a left-handed double-helix loop that is composed of 2·4^32 entangled photons (there are 2·4^16 photon galaxies, i.e. about 4 billion photon-galaxy pairs). Each helix loop is composed of 256 megachains. An antisuperphoton is a right-handed double-helix loop. The carrier of a photon, i.e. the binary system of neutrinos, has spin equal to 1 and is perpendicular to the axis of a superphoton. Spin waves are produced in the carriers of the superphotons. In fact, superphotons arise as entangled gluons that become photons outside the strong field.

Supertachyon and cosmic bulb: A supertachyon is a hypothetical tachyonic condensate whose mass is approximately equal to the sum of the masses of the Protoworld (i.e. after the period of inflation) and the cosmic loop, i.e. about 2.2·10^52 kg. During a collapse of a region of the Newtonian spacetime the pressure increases, and so does the speed of tachyons. This means that the mean radius of tachyons decreases. When such a supertachyon expands in the surrounding Newtonian spacetime composed of slower tachyons, there arises a shock wave that can create a cosmic bulb composed of pieces of space packed to the maximum. In different cosmic bulbs, the initial four of the six parameters can have different values. The sizes of cosmic bulbs can be tremendous in comparison with the today's radius of our Universe.

Tachyons: All particles are composed of structureless tachyons that have a positive inertial mass. In our region of the Newtonian spacetime they are moving approximately 8·10^88 times faster than photons. The unchanging mean speed of free and bound tachyons defines the mean radius of tachyons and leads to the relativity and to the law of conservation of energy. The high mean linear speed and the viscosity lead to the granulation of the eternal and internally continuous substance. This is because for smaller radii of the tachyons, the interaction time in direct collisions is shorter and the area of contact is smaller. This means that, for strictly determined radii, the grinding of tachyons stops. The tachyons interact only through direct collisions – such interactions are associated with the dynamic viscosity of tachyons resulting from the smoothness of their surfaces. In such a spacetime only the four succeeding phase transitions that lead to stable objects are possible. As tachyons interact only through direct collisions (they are bare particles), the gas-like Newtonian spacetime composed of structureless tachyons fills the whole volume of our cosmic bulb. The trajectories of tachyons take all possible directions (chaos). With our region of the Newtonian spacetime, only one set of physical laws is associated. The inertial mass of a tachyon is directly proportional to its volume. The spin of a tachyon is approximately 10^66 times smaller than the Planck constant, so they are practically zero-spin bosons.

The direct and indirect evidence for the existence of superluminal particles is as follows. There are the superluminal neutrinos. Entangled photons show that they can communicate with speeds higher than the c. The wave functions fill the whole of our Universe. The wave function describing our Universe can be a coherent mathematical object only if the very distant points of the wave function can communicate with speeds much higher than the c. We can say that coherent quantum physics needs the tachyons. Also the Michelson-Morley experiment leads to the conclusion that masses emit the tachyons.

We can define the total energy T as the sum of the energy E which appears in the General Relativity (the GR) and the imaginary energy N associated with the Newtonian spacetime: T = E + iN, where i = sqrt(–1). The word 'imaginary' means that the free tachyons have broken contact with the wave function describing the state of our Universe. In the GR we apply the formula for energy in which the mass M is the inertial mass equal to the gravitational mass. The tachyons cannot emit any objects, so they have the inertial mass m only. We substitute iN = –imc^2/sqrt(1 – v^2/c^2), i.e. N = mc^2/sqrt(v^2/c^2 – 1). The m is proportional to the volume of a tachyon, i.e. m = aV, so N = aVc^2/sqrt(v^2/c^2 – 1). We can see that when the speed v of a tachyon increases, its energy decreases. This is possible only due to the higher grinding of tachyons when they move with higher speed. We can see that the GR leads to the Newtonian spacetime, i.e. to the fundamental imaginary spacetime. We can see also that the GR is a more fundamental theory than Quantum Physics. Quantum Physics appears on a higher level of nature and is associated with the excited states of the Einstein spacetime. From the formula T = E + iN it follows that there exist two spacetimes, i.e. the Einstein spacetime and the imaginary Newtonian spacetime. The phase transitions of the imaginary Newtonian spacetime lead to the Einstein spacetime also.
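As a worked check of the claim that a tachyon's energy decreases as its speed increases, one can differentiate the stated formula directly (holding m fixed for the moment, although the text attributes the effect to grinding):

$$N(v) = \frac{mc^2}{\sqrt{v^2/c^2 - 1}}, \qquad \frac{dN}{dv} = -\frac{mv}{\left(v^2/c^2 - 1\right)^{3/2}} < 0 \quad \text{for } v > c,$$

so N is a strictly decreasing function of v in the superluminal regime.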
Tau lepton: It consists of an electron and a massive particle, created inside a baryon, which interacts with the point mass of the electron.

Tensor field: A tensor is the generalization of scalar and vector. There are the two spacetimes. The Newtonian spacetime consists, in approximation, of scalars, i.e. of the spinning tachyons whose spin is about 10^66 times smaller than the Planck constant. The Einstein spacetime consists of the neutrino-antineutrino pairs, i.e. the weak dipoles. The gravitational gradients produced in the Newtonian spacetime by the binary systems of neutrinos are impressed on the Einstein spacetime too. In the today's Universe, the gravitational energy is lost due to emissions of the non-rotating-spin neutrino bi-dipoles.

Titius-Bode law: It is obligatory for the strong interactions inside baryons and for the gravitational interactions near neutron black holes and their associations. The ratio A/B in the formula R = A + dB (for strong interactions d = 0, 1, 2, 4, whereas for gravitational interactions d = 0, 1, 2, 4, 8, 16, 32, 64, 128) is approximately 1.39 for both interactions.
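A minimal sketch of the orbit sequence R = A + dB described in this entry; A is set to 1 in arbitrary units and B is fixed by the only constraint the text gives, A/B ≈ 1.39:

```python
# Titius-Bode-like sequence R = A + d*B for the two d-sequences quoted above.
A = 1.0          # arbitrary units; only the ratio A/B ~ 1.39 is stated
B = A / 1.39

strong = [0, 1, 2, 4]                             # d-values inside baryons
gravitational = [0, 1, 2, 4, 8, 16, 32, 64, 128]  # d-values near black holes

print([round(A + d * B, 3) for d in strong])
print([round(A + d * B, 3) for d in gravitational])
```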
Tunnels in the Einstein spacetime: When a virtual particle decays into two parts moving in opposite directions, a hole in a field composed of binary systems of neutrinos is created in the place of decay. Such a set of holes creates a tunnel.

Ultimate Theory: There must exist a theory which leads to the initial conditions in the General Theory of Relativity (the GR) and the Quantum Theory of Fields (the QTFs). Such a theory must explain the origin of the basic physical constants as well. We can call such a theory the lacking part of the ultimate theory. We cannot formulate such a theory on the basis of the methods applied in the QTFs. It is because the GR and the QTFs neglect the internal structure of the bare fermions. In reality, there is a torus with a ball in its centre. We cannot describe such a structure mathematically by applying the mathematical methods typical for the QTFs in order to add this structure to the Lagrangian. We simply must apply new methods. The bare baryons, i.e. the cores of baryons, are stable structures, whereas the bare electron is stable for the period of spinning. Moreover, nature on its lowest levels once again behaves classically. These facts mean that the lacking part of the ultimate theory is a very simple non-perturbative classical theory. The Everlasting Theory is the lacking part of the ultimate theory, and this theory shows how the new methods should look. It is very easy to distinguish the more fundamental theories from the incomplete ones. A more fundamental theory should lead to the initial conditions applied in the incomplete theories, should contain fewer parameters and should solve more fundamental problems. The two long-distance interactions, i.e. gravity and electromagnetism, lead to two spacetimes. To explain the inflation, the existence of the wave function and the constancy of the speed of light we need a fundamental spacetime composed of tachyons. I call such a spacetime the Newtonian spacetime or the modified Higgs field. The modified Higgs field, i.e. the tachyon gas, behaves as a liquid due to the tremendous pressure in this spacetime. The Reynolds number for such a spacetime leads to the closed-strings/vortices composed of the tachyons. Their spin is half-integral. The inflexible binary systems of the closed strings arise due to the first phase transition of the modified Higgs field. We can say that the reduced Planck constant (i.e. the h divided by 2π) is the most fundamental physical constant. The second phase transition leads to the Einstein spacetime, the third to the core of baryons, whereas the fourth to the new cosmology. Due to very high temperature or strong fields there appear the symmetrical decays of the mesons. This leads to the Titius-Bode law for the strong interactions outside the core of baryons and for the gravitational interactions outside black holes.

Today, i.e. in the present state of the Universe, the quantum theory is characteristic not of the Einstein spacetime components but of the phenomena which take place in the Einstein spacetime. The modified Higgs field is classical also. Today quantum physics is simply valid in some interval of sizes. Of course, the GR was a quantum theory, but only in the era of inflation, i.e. the states of the Einstein spacetime components in the era of inflation can be described via wave functions. This means that within the GR we should find a solution which at least partially leads to the internal structure of the bare particles, which is neglected within the Quantum Theory of Fields. And it is the Edward Kasner solution (1921) for the flat anisotropic model. The mainstream classical GR does not concern the inflation, so a unification of the GR and the quantum field theories is impossible, i.e. we will never be able to describe these two theories within one coherent mathematical description. But it is possible to formulate a more fundamental theory which leads to the two sets of initial conditions from which the mainstream theories start, and the Everlasting Theory is such a fundamental theory. The inflation concerns the Newtonian spacetime. The Einstein spacetime components cannot move with a speed higher than the speed of light because the c is the natural speed of the Einstein spacetime components in the modified Higgs field. Our Universe cannot expand with a speed higher than the speed of light, so an acceleration of the expansion is impossible. The origin of the observed "acceleration" is different and follows from the fact that the GR is an incomplete theory. Generally, the GR is the correct theory, but the initial conditions are incomplete, so there appear incorrect conclusions such as, for example, the existence of gravitational waves or time loops. Due to the incompleteness there can appear unknown phenomena which concern, for example, the evolution of black holes.

The QED and QCD are perturbative theories, whereas the Everlasting Theory is a non-perturbative theory. Why must the ultimate theory contain both the non-perturbative and the perturbative theories? The ground state of the Einstein spacetime consists of the non-rotating-spin neutrino-antineutrino pairs. The total internal helicity of this state is zero and it consists of particles whose spin is unitary. In such a spacetime, loops having internal helicity, i.e. carrying mass, cannot appear. In reality, a unitary-spin loop (the loop state) is the binary system of two entangled half-integral-spin loops (total spin is 2·1/2 = 1) with opposite internal helicities, i.e. the resultant internal helicity is zero. Then turbulences do not appear in such a spacetime. Such a loop can easily transform into a fermion-antifermion pair (the fermion state). Perturbation theories concern the loop states whereas the non-perturbative theories concern the fermion states. In a non-perturbative theory such as the Everlasting Theory, we cannot neglect the internal structure of the bare fermions (there is a torus with a ball in its centre, and virtual pair(s) of fermions outside the bare fermion). In the QED both states, i.e.
the loop state and the fermion state, are separated with respect to time, whereas in the QCD they are not. Moreover, the QED and the Everlasting Theory are energetically equivalent, so within these theories we should obtain the same theoretical results. In baryons, both states are valid all the time, but the non-perturbative fermion state dominates at low energy whereas the loop state dominates at high energy. But it is easier to describe the liquid-like plasma within the fermion state. Since there are creations of the fermion-antifermion pairs from loops and annihilations back to loops, both states (loop and fermion) are energetically equivalent, but the bare-fermion state is mathematically much simpler. Why are the perturbation expansions valid? Due to the physical laws, the energy spectrum is quantized. To fit some energy of interaction to the quantized energy spectrum, most often there are many carriers of interactions in one event of interaction. At first, nature chooses a quantized energy from the spectrum close to, but smaller than, the energy of interaction. It is the one-loop interaction described by the first order in the perturbation expansion. When particles interact, the carriers of an interaction cannot be in the same state. This means that to fit the energy of interaction to the quantized energy spectrum, there must appear the higher orders containing 2 entangled loops, 3 entangled loops, 4 loops and so on. But most important in the perturbative theories is the fact that there must appear the changing sliding scale. Only then are the higher and higher orders in a perturbation expansion smaller and smaller. The sliding scale does not solve the problems at low energy (the coupling constants are great) because the Everlasting Theory shows that there is an upper limit for the energy of created gluon balls in baryons. The upper limit follows from the rest mass of the core of baryons (X + Y = 742.4 MeV). A gluon condensate of such rest mass produces a particle whose rest mass is 171.8 GeV. It is the mass of the top quark (see formulae (214)-(216)). We can see that the QCD should give the best results for a sliding scale above but close to the mass of the bottom quark and much lower than the mass of the top quark. And it is the known solution for the beta function. But to obtain the running coupling we need one additional parameter, i.e. the absolute value of alpha_strong for energy equal to the mass of the Z boson. There must be the minimal subtraction (or QCD scale) as well to eliminate the big values of the running coupling.

Universe-antiuniverse pairs: Similarly to particles, universes also arise as universe-antiuniverse pairs. The baryon-antibaryon symmetry was broken already before the 'soft' big bang (i.e. after the period of inflation). In the Einstein spacetime (its ground state consists of non-rotating-spin binary systems of neutrinos) the left- and right-handed vortices arise as vortex-antivortex pairs. The Protoworld associated with our Universe was left-handed. Neutrons have such internal helicity; therefore, in the left-handed vortex there appeared the protogalaxies composed of neutron black holes. The evolution of the Protoworld leads to dark energy. Inflows of dark energy into the protogalaxies caused their exits from the black-hole states. There is gravitational attraction between our Universe and its antiuniverse.

Virtual particles: In contrast to the real photons, they have mass not equal to zero.
For massless particles, the coupling constants are equal to zero because such particles cannot create gradients in the spacetimes and other fields (they 'slide' along a field). Virtual photons are the objects composed of non-rotating-spin binary systems of neutrinos. When the mean mass density of a virtual photon is lower than the mean mass density of the Einstein spacetime, its mass is negative; when it is higher, the mass is positive. The mass of a 'hole' in the Einstein spacetime (i.e. of a region with lower mass density than the mean density) is negative and imaginary because the lacking mass has broken contact with real particles. This means that the negative mass is defined as –im, where i = sqrt(–1). This definition leads to the negative square of the mass of the 'hole': (–im)^2 = –m^2. A vortex of massless energy E has mass m = E/c^2, i.e. the total energy is 2E. This means that in the field of a particle there can arise simultaneously the bare virtual particle-antiparticle pair(s) whose total positive mass is two times greater than the bare mass of the real particle. For example, in the electromagnetic field of a resting electron only one virtual bare electron-positron pair can be produced simultaneously.

Weak dipoles: These are binary systems of neutrinos, i.e. the neutrino-antineutrino pairs. The neutrinos carry the weak charges.

Weak charge: This is the torus of neutrinos. It looks like a miniature of the electric charge of the proton. It consists of the binary systems of the closed strings. On the surface of the torus of neutrinos the small loops arise. Their radii are 2π or 2π/3 times greater than the radius of the equator of the torus of neutrinos. The small loops are responsible for the short- and long-distance entanglement of particles. The binary closed strings a neutrino consists of suck up the tachyons from some volume. This leads to the short-distance weak interactions. The mass responsible for the weak interactions of baryons in the low-energy regime is the point mass inside the core of baryons – its mass is Y = 424.1 MeV. It is a relativistic object, so it can produce the W and Z bosons as well.

Weak interactions: Volumes filled with additional binary systems of neutrinos interact weakly. Weak interactions are due to the exchanges of such volumes. Surfaces of volumes interacting weakly should be at a distance equal to or smaller than 3482.87 times the external

Yang-Mills existence: The confinement, mass gaps and asymptotic freedom described within the Everlasting Theory are the foundations of the Yang-Mills theory acting correctly. Asymptotic freedom in the Everlasting Theory acts as follows. The components of the pions (the large loops) arise, due to the entanglement and confinement inside the torus of the core of baryons, as a closed loop composed of the Einstein spacetime components. The Einstein spacetime components are moving with the speed of light c. During acceleration of a baryon, due to the constancy of the c, the spin speed of the closed loop decreases, i.e. its lifetime, defined by the spin speed, increases. On the other hand, from the Uncertainty Principle it follows that when the lifetime increases, the energy of the closed loop decreases, i.e. during acceleration of the baryon the energy of the carriers of the strong interactions decreases, i.e. the value of the running coupling for the strong interactions decreases as well. We can see that the carriers of the strong interactions behave contrary to the Einstein formula for the relativistic mass.
Such behaviour follows from the structure of the core of baryons, the Uncertainty Principle and the coupling of the core of baryons with the Einstein spacetime. In the high-energy regime there appears the asymptote for alpha_strong at 0.1139.

Confinement in the Everlasting Theory acts as follows. To explain the confinement we need two parallel spacetimes. The two long-distance interactions, i.e. gravity and electromagnetism, lead to the two parallel spacetimes. The Einstein spacetime components suck in the components of the more fundamental Newtonian spacetime (it is the modified Higgs field) and, due to the internal helicity of the closed strings the components consist of, they transform the chaotic motions of the tachyons into jets. This causes a negative pressure to arise in the more fundamental spacetime inside and near the Einstein spacetime components. This means that in the non-perturbative regime there appears an attraction between the Einstein spacetime components when they are sufficiently close to one another. But such states are very unstable. The confinement is possible in each place of the two parallel spacetimes and concerns the zero-energy photon and gluon fields as well.

Mass gaps in the Everlasting Theory arise as follows. To describe the mass gaps we need additional phenomena which stabilize the confinement. For example, we need the phenomena characteristic of the core of baryons: the trajectories of the Einstein spacetime components, i.e. of the binary systems of neutrinos, cross the centre of the core, so in the centre their number density is higher. There appears a ball composed of the confined carriers of the gluons, so we can call it the gluon ball. We can see that balls composed of zero-energy gluons can exist as well. There is no increase in the mass of the Einstein spacetime components; the mass density of the local spacetime increases a little, i.e. the mass gaps are associated with the density, not with the individual components. We cannot detect the non-rotating-spin binary systems of neutrinos. It is because the Lagrangian of the ground state of the Einstein spacetime is today always constant. Mass gaps follow from confinement, but processes which stabilize the confinement are needed. Outside the strong fields, the gluons behave as photons. It is because the carriers of gluons and photons, i.e. the Einstein spacetime components, and the strong fields have internal helicity, whereas the electromagnetic field has none.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9542118906974792, "perplexity": 1809.0643724484166}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574265.76/warc/CC-MAIN-20190921043014-20190921065014-00526.warc.gz"}
http://mathhelpforum.com/geometry/157724-triangle-centers-proof.html
# Thread: Triangle Centers Proof

1. ## Triangle Centers Proof

Prove that a triangle is equilateral if and only if its centroid and circumcenter are the same point.

2. One way is pretty easy. As for the other way, we assume that the centroid and circumcenter are the same point, say, G. Let D be the midpoint of BC. Since D and G are two common points of the median from A and the perpendicular bisector corresponding to BC, these two lines must be the same. Can you complete the proof?
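To finish along the suggested lines (this completion is a sketch, not part of the original thread): since the median from $A$ coincides with the perpendicular bisector of $BC$, the segment $AD$ is both a median and an altitude. Then

$$\triangle ABD \cong \triangle ACD \ \ (\text{SAS: } BD = DC,\ \angle ADB = \angle ADC = 90^\circ,\ AD \text{ common}) \ \Longrightarrow\ AB = AC,$$

and repeating the argument with the midpoints of $AB$ and $AC$ gives $BC = CA$ as well, hence the triangle is equilateral.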
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687937259674072, "perplexity": 308.27434092724843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097861.54/warc/CC-MAIN-20150627031817-00136-ip-10-179-60-89.ec2.internal.warc.gz"}
https://astarmathsandphysics.com/a-level-maths-notes/m2/3617-non-uniform-rod-held-in-horizontal-equilibrium-by-cables-at-either-end-at-different-angles-to-vertical.html?tmpl=component&print=1&page=
## Non-Uniform Rod Held in Horizontal Equilibrium by Cables at Either End at Different Angles to the Vertical

It is not sufficient for equilibrium that the forces balance in each direction; there must also be no net turning moment about any point. The following diagram (not reproduced here) showed a non-uniform rod of weight 100 N held horizontal by the tensions in cables attached to either end of the rod, at different angles to the vertical. The centre of gravity of the rod is a distance x from end A. Resolving vertically gives one equation relating the two tensions; resolving horizontally gives a second; taking moments about an end then gives x, and the tensions follow.
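The equations lost from this page can be written in general form. With $T_A$, $T_B$ the tensions, α and β their angles to the vertical, L the rod's length and W = 100 N its weight (these symbols are introduced here for illustration; the original values were in the missing images), the equilibrium conditions are

$$T_A\cos\alpha + T_B\cos\beta = W, \qquad T_A\sin\alpha = T_B\sin\beta,$$

$$\text{moments about } A:\quad T_B\,L\cos\beta = Wx,$$

so x follows once the two tensions are known.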
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8118855357170105, "perplexity": 968.7345698940381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944677.39/warc/CC-MAIN-20180420174802-20180420194802-00374.warc.gz"}
https://www.physicsforums.com/threads/movement-of-point.40076/
# Movement of point

1. Aug 21, 2004

### TSN79

I have a structure that looks something like this; a steel pole is vertical for 2 m and then horizontal for 1 m, like an upside-down L-shape. At the end of the horizontal part, a force acts downwards (10 kN). How do I go about finding both the horizontal and vertical movement of the point where the force acts?

2. Aug 21, 2004

### Clausius2

You have to use the Navier-Bresse equations. If you don't know them, say you don't and I will explain the calculation. The shape you describe is very simple, so you will not spend much effort in solving it. First of all, you must work out the flector's distribution of the structure. When you have this distribution, insert it in the N-B equations. There are two theorems, the first and second Mohr theorems, that surely will help you much. If you don't know what flector's distribution means (maybe this word does not exist in English), say so in your next post, please.

3. Aug 21, 2004

### enigma

Staff Emeritus

Is the structure free to rotate, or is it fixed somewhere and you're measuring deflection?

4. Aug 23, 2004

### TSN79

Is this the Navier-Bresse equation you mentioned?

$$\delta=\frac{FL}{AE}$$

Because I have a pretty good idea that this is used in the solution, where F is a force, L is length, A is area, and E is a constant for steel. Delta is the movement I think. Flector's distribution I have not heard of; perhaps we call it something else in Norwegian. By the way, the structure is fixed to the ground at the end of the vertical part. Explain please...?

5. Aug 23, 2004

### Tom Mattson

Staff Emeritus

In every problem of this type, your solution must contain the following 3 ingredients:

1. Equilibrium
You must write down the equilibrium equations, using Newton's second law.

2. Force-Deformation Relations
That's the relationship between F and δ that you just wrote down in your last post.

Now those two are easy. The tricky part is the third ingredient.

3. Compatibility
These deforming members are fighting over a fixed amount of space. If one bar pushes ε units to the right, the other bar must give by moving ε units to the left. You should let the displacement of each member be a vector such as ui + vj, where u and v are independent variables, and use right-triangle trigonometry on the deformed member.

Last edited: Aug 23, 2004

6. Aug 24, 2004

### Clausius2

See this structure:

B_______C
|
|
|
A

A = point clamped to the ground (fixed, without possibility of rotation); B = welding point between the two girders; C = point where the force is exerted. A force F is exerted downwards at this point. Lengths: AB = L1; BC = L2.

All right, I'm going to solve this problem:

i) First of all we are going to calculate the bending moment distribution M (in Spanish it is called "momento flector"). Force reactions: VA = vertical reaction at A (pointing upwards); HA = horizontal reaction at A (pointing rightwards); MA = moment reaction at A (turning anticlockwise). VA = F; HA = 0; MA = F*L2; ok? So the bending moments are M = MA at point A, M = MA at point B, and M = 0 at point C. You should see that the bending moment is constant along AB, and linear along BC.

ii) Navier-Bresse equations. Horizontal movement at C:

$$\overline{u_{c}}=\int{\frac{M}{EI}sds}$$

where E = Young's modulus; I = the section's moment of inertia; s = doesn't matter. You can employ the 2nd Mohr theorem in order to solve this integral. Pay attention: take the bending moment distribution along AB. It's rectangular shaped, isn't it? Take the centroid of this distribution, namely G.
It's trivial to see that it's located at the middle point of AB. Project it onto the girder AB. Then join points G and C with a straight line. The segment normal to this last line will be the tangent of the trajectory of point C due ONLY to the AB bending moment distribution. You can draw a vector over this last line (it will point down and to the right) to see spatially the path of point C. The horizontal component of this vector will be Uc. How is it calculated? By handling the last equation:

$$\overline{v}=\sum(\frac{A_{i}}{EI}(d(G_{i}U)\overline{e_{x}}+d(G_{i}V)\overline{e_{y}}))$$

This equation is all you need. The sum sweeps i = 1, 2 because of the two bending moment distributions. A = area of each bending moment distribution (one is rectangular and the other one is triangular). d(GV) = distance from each centroid, calculated as stated before, to V, a line which goes through the point C in the vertical direction (y direction). d(GU) = distance from each centroid to U, a line which goes through C in the horizontal direction (x direction). e = unitary vector. v = movement vector.

My solution is:

$$\overline{v}=\frac{F L_{1} L_{2}}{EI}(0.5L_{1}\overline{e_{x}}-L_{2}\overline{e_{y}})+\frac{F L_{2}^2}{3EI}(-L_{2}\overline{e_{y}})$$

Anyway, you are solving an elastic body. If you don't have any knowledge of elastic theory or structural engineering, or you have never heard of the N-B equations, then you will struggle with this problem. I advise you to consult a structures book.
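The closed-form result in the last post can be cross-checked with the unit-load (virtual work) method. A small SymPy sketch, assuming the same geometry (fixed at A, vertical leg L1, horizontal leg L2, downward force F at C, bending stiffness EI); the expressions below are magnitudes, with the directions (+x, −y) as in the vector above:

```python
import sympy as sp

F, L1, L2, EI, s, t = sp.symbols('F L1 L2 EI s t', positive=True)

# Real bending moment: M = F*s along BC (s measured from C),
# and M = F*L2 (constant) along AB.
M_BC = F * s
M_AB = F * L2

# Unit-load moments for the vertical deflection at C (same shape as M/F):
m_v_BC, m_v_AB = s, L2
# Unit-load moments for the horizontal deflection at C: a horizontal unit
# load at C bends only AB (lever arm t measured from B).
m_h_BC, m_h_AB = 0, t

v_C = (sp.integrate(M_BC * m_v_BC, (s, 0, L2))
       + sp.integrate(M_AB * m_v_AB, (t, 0, L1))) / EI
u_C = (sp.integrate(M_BC * m_h_BC, (s, 0, L2))
       + sp.integrate(M_AB * m_h_AB, (t, 0, L1))) / EI

print(sp.expand(u_C))  # F*L1**2*L2/(2*EI)            -> the 0.5*L1 term
print(sp.expand(v_C))  # F*L1*L2**2/EI + F*L2**3/(3*EI) -> the two e_y terms
```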
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8937774300575256, "perplexity": 1747.8312241457174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00254-ip-10-171-10-108.ec2.internal.warc.gz"}
https://tantalum.academickids.com/encyclopedia/index.php/Pure_state
# Pure state

The term pure state refers to several related concepts in physics, particularly quantum mechanics and in functional analysis.

In quantum mechanics a pure state S of a quantum system is a state represented by a density operator which cannot be decomposed as a randomization of two statistically different statistical ensembles. Mathematically this means S is an extreme point in the set of states. Such states are given in Dirac bra-ket notation by

$$S = | \psi \rangle \langle \psi |$$

In the density operator formalism, a pure state is an idempotent transformation, following the properties of projection operators:

$$\rho = \rho^2$$

A pure state on a C*-algebra A is a state which is an extreme point of the set of all states on A. By properties of the GNS construction these states correspond to irreducible representations of A. The states of the C*-algebra of compact operators K(H) correspond exactly to the density operators, and therefore the pure states of K(H) are exactly the pure states in the sense of quantum mechanics.
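As an illustration of the idempotence condition, a small NumPy check that ρ = |ψ⟩⟨ψ| satisfies ρ² = ρ for a normalized state (the state vector here is an arbitrary example):

```python
import numpy as np

# An arbitrary normalized state vector |psi> in C^2.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Pure-state density matrix rho = |psi><psi|.
rho = np.outer(psi, psi.conj())

print(np.allclose(rho @ rho, rho))        # True: rho is idempotent
print(np.isclose(np.trace(rho).real, 1))  # True: unit trace
```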
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9366622567176819, "perplexity": 361.4433714036104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00300.warc.gz"}
http://fmoldove.blogspot.com/2015/03/the-amazing-so24-continuing-quantionic.html
## The amazing SO(2,4)

Continuing the quantionic discussions, let's see how the Lie algebra so(2,4) arises naturally in this framework. First, in quantum mechanics the anti-Hermitian operators form a Lie algebra, and the classification of Lie algebras was obtained by Elie Cartan. But in quantum mechanics there are additional relationships obeyed by the anti-Hermitian operators: if we multiply them by the imaginary unit we get a Jordan algebra, and between the two structures there is a compatibility relationship. This compatibility relationship restricts the possible Lie algebras and, as expected, one gets the unitary algebras su(n). However there is also an exceptional non-unitary solution: so(2,4).

It is too complicated to follow this line of argument, and instead I want to present a more elementary argument (also due to Emile Grgin) which arrives at so(2,4). With a bit of hindsight we start from a general SO(p,q) space which as a linear space is spanned by p positive basis vectors: $$e_1 , e_2 , \dots , e_p$$ and by q negative basis vectors: $$e_{p+1} , e_{p+2} , \dots , e_{p+q}$$ Then we want to investigate arbitrary reflections. Why? Because we are after non-standard ways to represent sqrt(-1) using the elements of SO(p,q), which is the key to the Hermitian-anti-Hermitian duality in quantum mechanics. For complex numbers, complex conjugation can be represented as a reflection about the real axis, and this is a good hint. We are interested in continuous transformations only to the extent that they can undo a reflection. If we can find a unique reflection, this can form a realization of the observables-generators duality in quantum mechanics.

Now consider a reversal of $$r$$ arbitrary basis vectors among $$e_1 , e_2 , \dots , e_p$$. If $$r$$ is even the transformation can be undone by rotations because the determinant of the transformation is positive. Similarly, all reversals for $$r$$ odd are equivalent. In general we can have $$r$$ inversions in the positive basis vectors and $$s$$ inversions in the negative basis vectors: $$J=s+r$$ Therefore in general there are $$K=n-J$$ invariant basis vectors, where n = p + q. Let us now rename the basis vectors as: $$R_1, R_2, \dots , R_J , S_1, S_2, \dots , S_K$$ (R for reversed, and S for same). Then there are 3 kinds of bivectors:

$$R_i \wedge R_j$$
$$S_i \wedge S_j$$
$$R_i \wedge S_j$$

The first two kinds do not change sign, but the last kind does. Let us introduce two more numbers:

N = number of bivectors of kind $$R_i \wedge S_j$$
P = number of bivectors of kind $$R_i \wedge R_j$$ + number of bivectors of kind $$S_i \wedge S_j$$ + 1 for the identity transformation

(N for negative, P for positive). Then the following relationships hold:

N = JK
P = 1/2 J(J-1) + 1/2 K(K-1) + 1 = 1/2 n(n-1) - N + 1

Now r and s must be odd numbers (any even number of reflections can be undone by a rotation): r = 2k+1 ≤ p and s = 2l+1 ≤ q, and introducing m = k+l as an auxiliary notation we get:

N(m) = JK = 2(m+1)(n-2m-2)

(since J = 2(m+1) and K = n-J = n-2m-2). Now we need to require that the complex conjugation is uniquely defined. This means that N(m) must have the same value for all the allowed values of m: $$N(0) =N(1)=\dots=N(m_{max})$$ Because N(m) is quadratic in m, it can take the same value for at most two values of m, so $$m_{max} = 1$$ and from N(0) = N(1) we get: 2·1·(n-2) = 2·2·(n-4), and so n = 6. Therefore we can have the solutions: SO(1,5), SO(2,4), SO(3,3), SO(4,2), SO(5,1). In the (1,5) and (3,3) cases $$m_{max} = 2$$ and we cannot have a unique way to define complex conjugation!!!
The only remaining case is SO(2,4) (which is isomorphic to SO(4,2)). If we want to generalize the number system for quantum mechanics in a way that respects the tensor product and obtain a non-unitary solution, the only possibility is SO(2,4). Why is this remarkable? Because SO(2,4) is the conformal group of the compactified Minkowski space. This is the first hint that ultimately we will get a relativistic quantum mechanics.
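The uniqueness argument can also be checked by brute force. A short script, following the constraints above (r = 2k+1 ≤ p, s = 2l+1 ≤ q, m = k+l, N(m) = 2(m+1)(n−2m−2)), lists which signatures (p,q) with p+q = 6 make N(m) constant over all allowed m:

```python
# For each signature (p, q), collect N(m) over all allowed m = k + l,
# where r = 2k+1 <= p and s = 2l+1 <= q, and test whether N is constant.
def N(m, n):
    return 2 * (m + 1) * (n - 2 * m - 2)

for p in range(1, 6):
    q = 6 - p
    n = p + q
    ms = sorted({k + l
                 for k in range((p - 1) // 2 + 1)
                 for l in range((q - 1) // 2 + 1)})
    values = [N(m, n) for m in ms]
    ok = len(set(values)) == 1
    print(f"SO({p},{q}): N(m) = {values} -> {'unique' if ok else 'fails'}")
    # Only SO(2,4) and SO(4,2) come out 'unique'.
```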
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9300830960273743, "perplexity": 585.5405513394451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589557.39/warc/CC-MAIN-20180717031623-20180717051623-00060.warc.gz"}
http://www.maa.org/press/periodicals/loci/joma/logistic-growth-model-inflection-points-and-concavity
# Logistic Growth Model - Inflection Points and Concavity

Author(s): Leonard Lipkin and David Smith

1. Which solutions of dP/dt = kP(1 − P/K) appear to have an inflection point? Express your conjecture in terms of starting values P(0). [For your convenience, the interactive figure from Part 3 is repeated here. Recall that the vertical coordinate of the point at which you click is P(0) and the horizontal coordinate is ignored.]
2. How is the location of the inflection point (when there is one) related to K? Record your conjecture -- you will check it in the next step.
3. Use your helper application to compute P''(t) in terms of P(t). (Don't forget the Chain Rule.) Use the resulting formula to confirm your preceding answers.
4. What is the significance of the inflection point in terms of population growth rate?

Leonard Lipkin and David Smith, "Logistic Growth Model - Inflection Points and Concavity," Convergence (December 2004)

## JOMA

Journal of Online Mathematics and its Applications
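For question 3 above, the chain-rule computation looks like this (writing the logistic equation with growth constant k and carrying capacity K, as assumed in question 1):

$$P'' = \frac{d}{dt}\left[kP\left(1-\frac{P}{K}\right)\right] = kP'\left(1 - \frac{2P}{K}\right),$$

so along a non-constant solution P'' = 0 exactly where P = K/2: the inflection point occurs as the population passes half the carrying capacity, which is where the growth rate P' is maximal.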
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8178388476371765, "perplexity": 1290.0516434371455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257832475.43/warc/CC-MAIN-20160723071032-00247-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/probability-density-function.124960/
# Probability Density Function

1. Jun 30, 2006

Not really a homework question, but a problem I don't get nonetheless. The density of fragments lying x kilometers from the center of a volcanic eruption is given by:

D(x) = 1/[sqrt(x) + 2] fragments per square kilometer.

To 3 decimal places, how many fragments will be found within 10 kilometers of the eruption's center?

I thought I was supposed to integrate the function from 0 to 100*pi, and in doing so I got 26.294 (I got 2[sqrt(x) - 2*ln(sqrt(x)+2)] when I integrated the function), but the answer was given to me as 70.424. The answer could very well be wrong, but I don't know that it is. What, if anything, am I doing wrong here?

2. Jun 30, 2006

### abercrombiems02

In cylindrical coordinates the integral of the density gives the distribution. In this case the problem requires integrating over an area, thus we have a double integral. In polar form J = int(int(f(r,theta)*r*dr)*dtheta) with the appropriate limits. Then J = int(int(1/(sqrt(x)+2),x) from 0 to 10, theta from 0 to 2pi). The result is simpler because theta does not appear inside the integral. The result is 2*pi*int(1/(sqrt(x)+2),x) from 0 to 10. That should be your answer.

3. Jun 30, 2006

### 0rthodontist

I'm not sure what you mean by ,x) abercrombie, but to be clear, the integral is

$$\int_{0}^{2 \pi}\int_{0}^{10} \frac{x}{\sqrt{x}+2} dx d\theta$$

because of the Jacobian x.
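A quick numerical check of 0rthodontist's integral reproduces the quoted answer (SciPy's quad is used here just as a convenience):

```python
from scipy.integrate import quad
import numpy as np

# Total fragments = integral of D over the disk of radius 10, in polar
# coordinates with Jacobian x:  2*pi * int_0^10 x/(sqrt(x)+2) dx.
inner, _ = quad(lambda x: x / (np.sqrt(x) + 2), 0, 10)
print(round(2 * np.pi * inner, 3))  # 70.424
```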
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98362797498703, "perplexity": 837.4028134153809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608633.44/warc/CC-MAIN-20170526032426-20170526052426-00422.warc.gz"}
https://www.physicsforums.com/threads/suprenum-question.102102/
# Supremum Question

1. Nov 30, 2005

### Bob19

Hello

I have two non-empty sets A and B, both bounded above in R. Then I'm tasked with proving that

$$sup(A \cup B) = max(sup A, sup B)$$

which supposedly means that $$sup(A \cup B)$$ is the larger of the two numbers sup A and sup B. Can this then be written as $$sup(A) < sup(A \cup B)$$ and $$sup(B) < sup(A \cup B)$$ ??? Can this then be proven by showing that $$sup(A) < sup(A \cup B)$$ is true? Or am I totally on the wrong path here??

/Bob

Last edited: Nov 30, 2005

2. Nov 30, 2005

### matt grime

Correct your tex and just verify the definitions of sup.

3. Nov 30, 2005

### Bob19

My definition of supremum is as follows: every non-empty, bounded above subset of R has a smallest upper bound. Then $$sup(A \cup B)$$ has a larger smallest upper bound than sup(A) and sup(B) according to the definition of supremum??? Does this prove the given argument in my first post?

/Bob

p.s. If my idea is true, can this then be proven by taking a number z, for which I then prove $$z \in sup(A \cup B)$$ but $$z \notin sup(A)$$ and $$z \notin sup(B)$$ ???

/Bob

Last edited: Nov 30, 2005

4. Nov 30, 2005

### matt grime

Why are you treating sup as a set (and taking elements in it)? Sup is not a set, it is an element of R. The sup of a set is the least upper bound (when it exists). Obviously the least upper bound of AuB is the max of the least upper bounds, but you need to verify it, i.e. show it is an upper bound, and show it is the least upper bound. The first is easy, the second slightly harder.

5. Nov 30, 2005

### Bob19

Okay, do those two aspects then prove the argument that sup(AuB) = max(sup A, sup B)????

/Bob

6. Nov 30, 2005

### matt grime

As I explained to someone else earlier tonight, I can easily see the answer because of experience; YOU need to demonstrate that you understand the answer by not having to let me fill in any blanks. If you don't see that an argument proves something then YOU need to do some work to rectify that, not me.
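For reference, the two verifications matt grime outlines can be written out in a few lines (this sketch is not part of the original thread). Let $$M = max(sup A, sup B)$$. For any $$x \in A \cup B$$, either $$x \le sup A$$ or $$x \le sup B$$, so $$x \le M$$; hence M is an upper bound of $$A \cup B$$. Conversely, any upper bound u of $$A \cup B$$ is an upper bound of both A and B, so $$u \ge sup A$$ and $$u \ge sup B$$, hence $$u \ge M$$. Thus M is the least upper bound: $$sup(A \cup B) = M$$.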
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.988602876663208, "perplexity": 974.8397394268962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102663.36/warc/CC-MAIN-20170816212248-20170816232248-00399.warc.gz"}
http://tex.stackexchange.com/questions/52424/how-can-latex-code-in-a-data-file-be-read-by-pgfplotstable
How can LaTeX code in a data file be read by pgfplotstable?

I would like to place some LaTeX math code in the columns of a file to be read by pgfplotstable. Unlike How to use underscores with pgfplotstable?, I would like the columns to be formatted so LaTeX will interpret the subscripts and symbols they contain. For example, I have a data file that looks like:

{Time} {$\theta$} {Fit to $C_a \tan 3\theta$}
0 0.2 0.195
...

Every time I try to read a file like this with pgfplotstable, I get an error:

! Missing \endcsname inserted.
\mathop
l.101 }
^^M

Is it possible to have the formatting embedded in the column headings of a data file? If so, how?

- In what context do you want to use the LaTeX code contained in the tables? In a table printed using \pgfplotstabletypeset, or as a legend in a PGFplots plot? – Jake Apr 18 '12 at 15:05
- @Jake: Let's say I'd like to use it in a legend. I figure if I change how the column is generated, I'm more likely to change the column name than the LaTeX code generating the plot. – sappjw Apr 18 '12 at 16:18

pgfplotstable assumes that column names do not contain expandable material. Its way of dealing with "display names" is to provide the display names explicitly using the column name key, which is used during \pgfplotstabletypeset. Protecting expandable material in column names is unsupported. It is unlikely that pgfplotstable will support such protection automatically in the future (because protection cannot be done during \edef, which is a common use case of access to columns).

So, the answer is: no, this is generally unsupported. Period.

You can hack the processing such that it does basic escaping - at your own risk. You should skip this section unless you know what you are doing and you know why you are doing it. Do not blame me. This here would work if you wanted to typeset the table:

\let\ESCAPE=\string
\pgfplotstabletypeset[
{Time} {$\ESCAPE\theta$} {Fit to $C_a \ESCAPE\tan 3\ESCAPE\theta$}

It is based on the fact that \string\macro yields the string sequence "\ m a c r o". Clearly, you would need to change the header of your input table for such an approach.

- Would \unexpanded help? – egreg Apr 19 '12 at 21:55
- @egreg thanks for the hint. It does not seem so (and neither does \noexpand). – Christian Feuersänger Apr 19 '12 at 22:08
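The supported route the answer mentions — plain names in the data file, display names supplied via the column name key at typesetting time — would look roughly like this (an untested sketch; \mytable, theta and fit are stand-in names for a table previously loaded with \pgfplotstableread):

```latex
% Data file keeps plain column names (theta, fit); the math goes here.
\pgfplotstabletypeset[
    columns/theta/.style={column name={$\theta$}},
    columns/fit/.style={column name={Fit to $C_a \tan 3\theta$}},
]{\mytable}
```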
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804737389087677, "perplexity": 1466.9758048982567}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661733.69/warc/CC-MAIN-20150417045741-00106-ip-10-235-10-82.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/49497/seteropere?tab=summary
seteropere Reputation 280 Top tag Next privilege 500 Rep. Access review queues 2 10 Impact ~4k people reached • 0 posts edited ### Questions (46) 2 lower bounds on the number of directed acyclic graphs with $n$ vertices 2 What is the meaning of $n\log^2(n)$ 2 Is there a general formula for the chromatic polynomial of power set graphs? 2 The width of a power set graph and its orientations 2 How many possible DAGs are there with $n$ vertices ### Reputation (280) This user has no recent positive reputation changes This user has not answered any questions ### Tags (24) 0 graph-theory × 21 0 lattice-orders × 3 0 order-theory × 17 0 polynomials × 3 0 combinatorics × 8 0 trees × 2 0 elementary-set-theory × 5 0 algebra-precalculus × 2 0 combinations × 3 0 asymptotics × 2 ### Accounts (22) Academia 8,237 rep 12671 Computer Science 330 rep 1212 Mathematics 280 rep 210 Theoretical Computer Science 252 rep 1212 TeX - LaTeX 249 rep 26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8678014278411865, "perplexity": 4191.301801354872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00188-ip-10-164-35-72.ec2.internal.warc.gz"}
https://collaborate.princeton.edu/en/publications/foreground-mismodeling-and-the-point-source-explanation-of-the-fe
# Foreground mismodeling and the point source explanation of the Fermi Galactic Center excess

Malte Buschmann, Nicholas L. Rodd, Benjamin R. Safdi, Laura J. Chang, Siddharth Mishra-Sharma, Mariangela Lisanti, Oscar Macias

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

## Abstract

The Fermi Large Area Telescope has observed an excess of ∼GeV energy gamma rays from the center of the Milky Way, which may arise from near-thermal dark matter annihilation. Firmly establishing the dark matter origin for this excess is however complicated by challenges in modeling diffuse cosmic-ray foregrounds as well as unresolved astrophysical sources, such as millisecond pulsars. Non-Poissonian template fitting (NPTF) is one statistical technique that has previously been used to show that at least some fraction of the GeV excess is likely due to a population of dim point sources. These results were recently called into question by Leane and Slatyer (2019), who showed that a synthetic dark matter annihilation signal injected on top of the real Fermi data is not recovered by the NPTF procedure. In this work, we perform a dedicated study of the Fermi data and explicitly show that the central result of Leane and Slatyer (2019) is likely driven by the fact that their choice of model for the Galactic foreground emission does not provide a sufficiently good description of the data. We repeat the NPTF analyses using a state-of-the-art model for diffuse gamma-ray emission in the Milky Way and introduce a novel statistical procedure, based on spherical-harmonic marginalization, to provide an improved description of the Galactic diffuse emission in a data-driven fashion. With these improvements, we find that the NPTF results continue to robustly favor the interpretation that the Galactic Center excess is due, in part, to unresolved astrophysical point sources across the analysis variations that we have explored.

Original language: English (US)
Article number: 023023
Journal: Physical Review D
Volume: 102
Issue number: 2
https://doi.org/10.1103/PhysRevD.102.023023
Published - Jul 15 2020

## All Science Journal Classification (ASJC) codes

• Physics and Astronomy (miscellaneous)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9087126851081848, "perplexity": 2579.157382855473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556133.92/warc/CC-MAIN-20210624141035-20210624171035-00070.warc.gz"}
https://depositonce.tu-berlin.de/items/812b7433-5772-4d36-b8dc-d6300b2835d4
# Visual Measurement System for Wheel−Rail Lateral Position Evaluation

## FG Schienenfahrzeuge

Railway infrastructure must meet safety requirements concerning its construction and operation. Track geometry monitoring is one of the most important activities in maintaining the steady technical conditions of rail infrastructure. Commonly, it is performed using complex measurement equipment installed on track-recording coaches. Existing low-cost inertial sensor-based measurement systems provide reliable measurements of track geometry in vertical directions. However, solutions are needed for track geometry parameter measurement in the lateral direction. In this research, the authors developed a visual measurement system for track gauge (TG) evaluation. It involves the detection of measurement points and the visual measurement of the distance between them. The accuracy of the visual measurement system was evaluated in the laboratory and showed promising results. The initial field test was performed in the Vilnius railway station yard, driving at low velocity on the straight track section. The results show that the image point selection method developed for selecting the wheel and rail points to measure distance is stable enough for TG measurement. Recommendations for the further improvement of the developed system are presented.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8586869239807129, "perplexity": 1523.872278245011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00190.warc.gz"}
http://tex.stackexchange.com/tags/chemfig/hot?filter=all
# Tag Info

18 The following example uses the fact that the tikz code for the arc uses a center node named arccenter. The tikz option argument for the \draw command of the arc can be used with the option late options to put a label in the center:

\documentclass{article}
\usepackage{chemfig}
\begin{document}
\chemfig{ N**[0,-144,dash pattern=on 2pt off 2pt, late ...

16 Here are the fish you wanted, that I caught with texdoc chemfig.

\documentclass{article}
\usepackage{chemfig}
\begin{document}
\setatomsep{2em}
\setdoublesep{.3em}
\renewcommand*\printatom[1]{\ensuremath{\mathsf{#1}}}
\chemfig[line width=1pt]
{
HO-*6(-=-(-(-[::90]CH_3)(-[::-90]CH_3)-*6(-=-(-OH)=-=))=-=)
}
\end{document}

Now your turn to fetch the ...

13 This is not fully automatic (you need to specify the rotation for the molecule and the corresponding \chemmove command) and the TikZ code can most likely be improved, but it may be a start. It places two invisible bonds where I've marked the respective ends with chemfig's @{<node name>} syntax. They are used to draw the rectangle later. ...

12 Perhaps the chemfig package may help you, though it relies on tikz? Here is the first reaction:

\documentclass{minimal}
\usepackage{chemfig}
\begin{document}
text before
\schemestart
\setlewis{4pt}{}{red}\Lewis{0:,\chemfig{@{a}\textcolor{red}{A}}}\hskip4pt\chemfig{@{b}\textcolor{black!40!green}{B}}%
\arrow
\Lewis{0.,A}%
\+\Lewis{4.,B}%
...

12 This answer was provided to me by Christian Tellechea in private correspondence; I post it here for the benefit of other people who may be interested.

\documentclass{minimal}
\usepackage{chemfig,xstring}
\makeatletter
% "\derivesubmol" defines the new #1 submol, obtained by replacing all the
% occurrences of "#3" by "#4" in the code of #2 submol
% ...

12 Yes, chemfig is a great tool. But, as with almost every code-to-picture system, the syntax is not trivial. Please consider the following example. You can easily see that chemfig follows a logical and human-readable syntax, but it will become extremely complex for larger structures. And as far as I can see, chemfig is the easiest system for chemical ...

11 I would propose to insert a small invisible bond:

\documentclass{article}
\usepackage{chemfig}
\begin{document}
\chemfig{*6(-(-[,.1,,,draw=none]\Lewis{0.,})=-=-=)}
\end{document}

If you need it more often it is probably a good thing to define a suitable submol:

\documentclass{article}
\usepackage{chemfig}
\definesubmol{e}{-[,.1,,,draw=none]}
...

11 Here are different versions using different chemistry packages. Which one you want to use is up to you... chemfig provides \schemestart, \schemestop, \chemfig, \lewis, \Lewis, \chemname ...; mhchem provides \ce{}; chemformula (part of the chemmacros bundle) provides \ch{} with the !(<below>)(<formula>) syntax and \chlewis; bohr provides ...

10 I introduce \lewis with 7 arguments, and I apologize that I don't know the precise naming conventions for chemistry. (EDITED to specify valence in only one place). Arguments: #1 Core atom #2 Top electrons #3 Right electrons #4 Bottom electrons #5 Left electrons #6 Valence #7 Inner electron shells

\documentclass{article}
...

10 Something like this? MWE:

\documentclass{article}
\usepackage{chemfig}
\begin{document}
\chemfig{P(=[2,0.7]O)(-[:-30,0.8]C_{5}H_{11})(-[:150,0.8]C_{5}H_{11}O)(-[:210,0.8]C_{5}H_{11}O)}
\end{document}

To specify an angle you have to use the notation [:<angle>], while to specify a custom length for the bonds you have to use [,<length>], so ...
10 An example for this is given in the documentation of chemfig (part III section 12.5 Draw polymer element). There a macro \makebraces(<dim up>,<dim down>){<subscript>}{<opening node name>}{<closing node name>} is defined that exploits chemfig's @{<node name>,<pos>} syntax inside formulas to position delimiters on a ...

9 I've never written anything with chemfig before so had to just guess at the syntax. This is basically my comment above made into an example and expanded a little. There are two ways to achieve the effect of one line going over another. One way is for the under line to "know" that it is the under line and break itself at the crossing point (this is what I ...

9 \centerline{ isn't really a latex command (it is in the latex format but just escaped from plain TeX). It makes a box \hsize wide but does not know about latex list structures and the indentation they introduce. So your line is too wide by 25pt, which will be the left margin of the list item. Just remove \centerline and replace it with a ...

8 The chemmacros package offers s, p and sp-hybrid orbitals (based on TikZ); d orbitals are missing (for now). These are examples from the documentation:

\documentclass{article}
\usepackage{chemfig,chemmacros}
\begin{document}
\setbondoffset{0pt}
\chemsetup[orbital]{
overlay ,
opacity = .75 ,
p/scale = 1.6 ,
s/color = blue!50 ,
s/scale = 1.6
...

8 I'm guessing the \circleatom is a copy of what I used here. I vaguely remembered having defined something like it before. The middle of an -U> arrow is a node named Uarrow@arctangent. If you have only one such arrow you can use that fact to write something beneath it with \chemmove:

\documentclass{article}
\usepackage{chemfig}
...

8 I would write this:

\chemfig{A -[:-45]B -[:180,0.75]C -[:45,,,,preaction={draw=white, -,line width=6pt}]D }

8 chemfig allows adding explicit node names to either bonds or atoms in its formulae by using the @{<name>} syntax. These names can be used in a tikzpicture with the options remember picture, overlay to draw the curved arrows. chemfig provides the wrapper \chemmove for this. So a combination of chemfig and TikZ can be used to draw the schemes. (BTW: the ...

7 The \chemfig command has two optional arguments. The manual says this: The \chemfig command takes two optional arguments; their syntax is as follows: \chemfig[<opt1>][<opt2>]{<molecule code>} The first optional argument <opt1> contains tikz instructions which will be passed to the tikzpicture environment in which the ...

7 I'd use an invisible bond pointing to the center of the ring (with a relative angle) to place the plus. Something like (-[::126,,,,draw=none]\oplus), possibly scaled a bit. On the other hand I do like Heiko's answer better than mine :)

\documentclass{article}
\usepackage{chemfig}
\begin{document}
\chemfig{ R-[:36]N **[216,360,dash pattern=on 2pt off ...

7 Here are two ideas:

\documentclass{article}
\usepackage{chemfig}
\definesubmol{ring}{(-[::-60]=^[::60]-[::60])=_[::60]-[::-60]=_[::-60]}
\definesubmol{ring2}{(**6(------))-[,,,,draw=none]-[,,,,draw=none]}
\begin{document}
\chemfig{-!{ring}-!{ring}-!{ring}}
\chemfig{-!{ring2}-!{ring2}-!{ring2}}
\end{document}

7 I actually figured this issue out by digging through the 83 pages of the chemfig manual. Here's the code I've changed and the following result:

\chemleft[\chemfig{\lewis{,H}\pol{+}-\lewis{2:,O}\pol{-}(-[6]\lewis{,H}\pol{+})-\lewis{,H}\pol{+}}\ind\ind\chemright]^{+}

7 I've contacted Christian Tellechea, the maintainer of the chemfig package.
Hi Christian, there is currently a discussion about bond joints in chemfig on tex.stackexchange: Ugly bond joints in chemfig. Are you aware of that problem? Is it a chemfig problem or a TikZ problem? I would really appreciate it if you could participate in the ...

7 You had a syntax error, with a final } bracket missing on the last compound. Apart from that, to put a label on an arrow use the general form \arrow{->[up][down]}. To colour atoms, use {\color{red}H}. However, in a number of situations, such as ends of bonds, you need to write {}|{\color{red}H} instead.

\documentclass[12pt,letterpaper]{report}
...

7 To avoid breaking the lines, create an empty bond and attach it. To raise or lower an argument of \chemabove or \chembelow, use the optional argument. The chemfig manual recommends \definesubmol\nobond{[,0.2,,,draw=none]} in 12.2 Add a superscript without modifying a bond, and using this:

\documentclass{minimal}
\usepackage{chemfig}
...

7 \chemabove and \chembelow may help:

\documentclass{article}
\usepackage{chemfig}
\newcommand*\circleatom[1]{\tikz\node[circle,draw,fill=green!30]{\printatom{#1}};}
\begin{document}
\setlewisdist{4pt}
\schemestart
\chemname{%
\chemfig{%
H_{2}
-[:-90]\lewis{0:,N}
-[:225]H_{2}C
-[:-45]C(=[:-90]O)
...

6 I'll answer under the assumption that you're the one who emailed me the other day about Missing bond type in chemical rendering packages? and to whom I suggested the following definition of the submol {Rsubst}:

\tikzset{
subst/.style={shorten <= 10pt,preaction={draw=white,line width=6pt}}
}
\definesubmol{subst}{-[,-1,,,draw=none]-[::-30,1.5,,,subst]}
...

6 ChemFig is a great package, but I don't believe that you'll gain much time if you create your schemes with it rather than with ChemDraw. Once you know your way around ChemFig you're just as fast or slow with it as with ChemDraw (supposing you know your way around that, too), at least that's my experience. There are other points you should consider: ...

6 It is not possible to draw such schemes with mhchem, but as Twig has already noticed, chemfig can be used. To expand a bit on Twig's answer and chemfig's possibilities: the \arrow command is the important macro here, which is only valid between \schemestart and \schemestop. It has lots of arguments, so here is a quick overview: draw an arrow and call the ...

6 The \chemname command takes an optional argument for vertical spacing: \chemname[<dim>]{\chemfig{<code of the molecule>}}{<name>}. However, since your problem is a common one, chemfig gives you the possibility to initialize the space. Quoting the manual: In fact, to draw the <name> the command \chemname inserts 1.5ex + the ...

6 I have tried to "fix" the problem. It is not really a "bugfix" (since there is no bug) but a dirty workaround. It seems to work: The beta version needs more testing. If you can't wait (or want to test it), you can download it here. The zip file contains the package source itself (chemfig.tex), a small test file (test.tex) and the pdf manual, compiled ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.938628613948822, "perplexity": 4204.5063465196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451648.66/warc/CC-MAIN-20151124205411-00299-ip-10-71-132-137.ec2.internal.warc.gz"}
https://tohoku.pure.elsevier.com/ja/publications/multistate-network-model-for-the-pathfinding-problem-with-a-self-
# Multistate network model for the pathfinding problem with a self-recovery property

Kei Ichi Ueda, Masaaki Yadome, Yasumasa Nishiura

1 Citation (Scopus)

## Abstract

In this study, we propose a continuous model for a pathfinding system. We consider acyclic graphs whose vertices are connected by unidirectional edges. The proposed model autonomously finds a path connecting two specified vertices, and the path is represented by a stable solution of the proposed model. The system has a self-recovery property, i.e., the system can find a path when one of the connections in the existing path is suddenly terminated. Further, we demonstrate that the appropriate installation of inhibitory interaction improves the search time.

Original language: English
Pages: 32-38
Number of pages: 7
Journal: Neural Networks
Volume: 62
https://doi.org/10.1016/j.neunet.2014.08.008
Published - Feb 1 2015

• Cognitive Neuroscience
• Artificial Intelligence

## Fingerprint

Dive into the research topics of 'Multistate network model for the pathfinding problem with a self-recovery property'. Together they form a unique fingerprint.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9618310928344727, "perplexity": 3097.0911282797583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.52/warc/CC-MAIN-20210730091645-20210730121645-00190.warc.gz"}
https://en.wikipedia.org/wiki/Rayleigh_dissipation_function
# Rayleigh dissipation function

In physics, the Rayleigh dissipation function, named for Lord Rayleigh, is a function used to handle the effects of velocity-proportional frictional forces in Lagrangian mechanics. It is defined for a system of ${\displaystyle N}$ particles as

${\displaystyle F={\frac {1}{2}}\sum _{i=1}^{N}(k_{x}v_{i,x}^{2}+k_{y}v_{i,y}^{2}+k_{z}v_{i,z}^{2}).}$

The force of friction is the negative of the velocity gradient of the dissipation function, ${\displaystyle {\mathbf {F} }_{f}=-\nabla _{\mathbf {v} }F}$. The function is half the rate at which energy is being dissipated by the system through friction.
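A one-particle worked example (added here for illustration; not part of the article), taking an isotropic coefficient $k_{x}=k_{y}=k_{z}=k$:

$$F=\frac{k}{2}\lVert\mathbf{v}\rVert^{2},\qquad \mathbf{F}_{f}=-\nabla_{\mathbf{v}}F=-k\mathbf{v},\qquad -\mathbf{F}_{f}\cdot\mathbf{v}=k\lVert\mathbf{v}\rVert^{2}=2F,$$

recovering the familiar linear drag force and confirming that the dissipated power is twice the value of the dissipation function.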
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994253933429718, "perplexity": 295.14157594443355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648198.55/warc/CC-MAIN-20180323063710-20180323083710-00062.warc.gz"}
http://math.stackexchange.com/questions/334638/interpolation-result-for-brownian-motion-in-donskers-theorem
# Interpolation result for Brownian Motion in Donsker's Theorem

Suppose we have an increasing sequence of stopping times $\{\tau_n\}$ such that the increments $\tau_n-\tau_{n-1}$ are iid. Furthermore, let $B$ be a Brownian motion and define $S_n:=B(\tau_n)$, which gives a random walk. Moreover,

$$S^n(t):=\frac{1}{\sqrt{n}}\left[\left(nt-k\right)S_{k+1}+\left(k+1-nt\right)S_k\right]$$

for $\frac{k}{n}\le t\le \frac{k+1}{n}$ and $k=0,\dots,n-1$. So $S^n$ is the piecewise linear interpolation of the rescaled walk; note that the coefficients $nt-k$ and $k+1-nt$ are nonnegative and sum to $1$.

Why do we have the following: for every $t\in[\frac{k}{n},\frac{k+1}{n}]$ there is $\nu\in[\tau_k,\tau_{k+1}]$ such that $S^n(t)=\frac{1}{\sqrt{n}}B_{\nu}$?

Obviously at the endpoints of the interval this is clear. But why is it true for $t\in(\frac{k}{n},\frac{k+1}{n})$? -

This is a direct consequence of the intermediate value theorem: let $t \in \left[ \frac{k}{n}, \frac{k+1}{n} \right]$. Then

$$\sqrt{n} \cdot S^n(t) = \left(nt-k\right) B_{\tau_{k+1}} + \left(k+1-nt\right) B_{\tau_k} \in [\min\{B_{\tau_k},B_{\tau_{k+1}}\},\max\{B_{\tau_k},B_{\tau_{k+1}}\}],$$

since the right-hand side is a convex combination of $B_{\tau_k}$ and $B_{\tau_{k+1}}$. Since $t \mapsto B(t,w)$ is continuous for almost all $w$, there exists by the intermediate value theorem $\nu(w) \in [\tau_k(w),\tau_{k+1}(w)]$ such that

$$\sqrt{n} \cdot S^n(t,w) = B_{\nu}(w)$$

for almost all $w$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994138240814209, "perplexity": 40.25808353237592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.preprints.org/manuscript/201903.0117/v1
Preprint Article Version 1 This version is not peer-reviewed

# An Accelerating and Rotating Planck-Hubble Universe (A Very Simple Model of Quantum Cosmology)

Version 1 : Received: 8 March 2019 / Approved: 11 March 2019 / Online: 11 March 2019 (08:01:28 CET)

How to cite: Seshavatharam, U.V.S.; Lakshminarayana, S. An Accelerating and Rotating Planck-Hubble Universe (A Very Simple Model of Quantum Cosmology). Preprints 2019, 2019030117 (doi: 10.20944/preprints201903.0117.v1).

## Abstract

With reference to the Planck scale, in view of increasing support for large-scale cosmic anisotropy and preferred directions, and by considering an increasing ratio of Hubble parameter to angular velocity right from the beginning of the Planck scale, we make an attempt to estimate the ordinary matter density ratio, dark matter density ratio, mass, radius, temperature, age and expansion velocity (from and about the baby universe in all directions). We would like to suggest that, from the beginning of the Planck scale: 1) Dark matter can be considered as a kind of cosmic foam responsible for the formation of galaxies. 2) Cosmic angular velocity decreases with the square of the decreasing cosmic temperature. 3) The increasing ratio of Hubble parameter to angular velocity plays a crucial role in estimating the increasing cosmic expansion velocity and the decreasing density ratios of dark matter and ordinary matter. 4) There is no need to consider dark energy for understanding cosmic acceleration.

## Subject Areas

Planck scale; quantum cosmology; critical density; ordinary matter; dark matter; expansion velocity; angular velocity; Hubble's law
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8990822434425354, "perplexity": 3199.8628571765294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886121.45/warc/CC-MAIN-20200704104352-20200704134352-00096.warc.gz"}
http://physics.stackexchange.com/questions/101029/why-is-it-that-angular-acceleration-is-constant-in-different-instantaneous-refer
# Why is it that angular acceleration is constant in different instantaneous reference frames?

Take the following example: A rod (of length L and mass m) is held horizontally at both ends by supports. One is instantaneously removed. The specific problem is to prove that the force on the other support drops from mg/2 to mg/4, which I proved by first considering the centre of mass as the instantaneous reference frame, and thus considering a rotation around the support. Resolving angular forces (F = force at pivot, I = moment of inertia = m(L^2)/12, ω = angular velocity):

FL/2 = I * dω/dt
FL/2 = m(L^2)/12 * dω/dt
F = mL/6 * dω/dt (1)

Now taking the instantaneous reference frame around the pivot (I = m(L^2)/3, ω' = angular velocity, force at CoM = mg):

mg * L/2 = I * dω'/dt
mgL/2 = m(L^2)/3 * dω'/dt
dω'/dt = 3g/2L (2)

The desired solution can be found by substituting (2) into (1), i.e. by equating dω'/dt and dω/dt, which gives F = (mL/6)(3g/2L) = mg/4. Why is it that this can be done? -

What is angular speed? Clearly it is $\frac {v_\perp} {r}$, where the symbols have their usual meanings. The rod rotates about, say, its rightmost point $O$. We will take the left side as the positive $x$-axis. Now consider a point $A$ at distance $r_1$ from it. Let the rod have instantaneous angular speed $\omega$. All points on the rod have this $\omega$ wrt $O$. Consider a point $B$ at a distance $r_2$ from it, clearly with the same $\omega$ wrt $O$. This can be seen from the fact that the rate of change of angular displacement is the same for all points, as $\omega =\frac {d\theta}{dt}$. Assume $r_2 > r_1$.

Now take the point $A$ as the frame of reference and let us calculate $\omega'$, the angular speed of $B$ wrt $A$. Clearly, $v_A=\omega r_1$ wrt the ground, and that of $B$ is $\omega r_2$. Now calculate $v_\perp$ of $B$ wrt $A$. Clearly, it is $v_B-v_A =\omega(r_2-r_1)$, and the distance between $A$ and $B$ is $r_2-r_1$. So, what do you get for $\omega'$?

$\omega' =\frac {\omega(r_2-r_1)} {r_2-r_1} =\omega$

As the instantaneous angular speed is the same, its rate of change $\alpha$ will also be the same. Get it? This is a bit of a general answer, but it applies to your question too. Also, you can solve your question by using Newton's second law and $a=\alpha \frac L 2$. -
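For completeness, here is the Newton's-second-law route the answer mentions at the end, worked out (this computation is added here and is not part of the original posts): with the pivot force $F$ acting upward and gravity $mg$ downward, the centre of mass accelerates at $a=\alpha\frac{L}{2}$, so

$$mg - F = ma = m\,\alpha\frac{L}{2} = m\cdot\frac{3g}{2L}\cdot\frac{L}{2} = \frac{3mg}{4}\quad\Longrightarrow\quad F=\frac{mg}{4}.$$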
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9881340861320496, "perplexity": 527.6383141437224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446535.72/warc/CC-MAIN-20151124205406-00032-ip-10-71-132-137.ec2.internal.warc.gz"}
https://drmoazzam.com/matlab-code-bpsk-modulation-and-demodulation-with-explanation?replytocom=995
# BPSK Modulation And Demodulation- Complete Matlab Code With Explanation

Binary Phase Shift Keying (BPSK) is a type of digital modulation technique in which we send one bit per symbol, i.e., a '0' or a '1'. Hence, the bit rate and symbol rate are the same. Depending upon the message bit, we can have a phase shift of 0° or 180° with respect to a reference carrier, as shown in the figure above. For example, we can have the following transmitted band-pass symbols:

$S_1=\sqrt{\frac{2E}{T}}\cos{(2\pi f t)}\rightarrow \text{represents } '1'$

$S_2=\sqrt{\frac{2E}{T}}\cos{(2\pi f t+\pi)}\rightarrow \text{represents } '0'$

or

$S_2=-\sqrt{\frac{2E}{T}}\cos{(2\pi f t)}\rightarrow \text{represents } '0'$

where 'E' is the symbol energy, 'T' is the symbol time period, and f is the frequency of the carrier. Using Gram-Schmidt orthogonalization, we get a single orthonormal basis function, given as:

$\psi_1=\sqrt{\frac{2}{T}}\cos{(2\pi f t)}$

Hence, the resulting constellation diagram can be given as: there are only two in-phase components and no quadrature component. Now, we can easily see that the two waveforms $S_1$ and $S_2$ are inverted with respect to one another, and we can use the following scheme to design a BPSK modulator:

First the NRZ encoder converts these digital bits into impulses to add a notion of time into them. Then the NRZ waveform is generated by up-sampling these impulses. Afterwards, multiplication with the carrier (orthonormal basis function) is carried out to generate the modulated BPSK waveform.

Demodulator Design: We do coherent demodulation of the BPSK signal at the receiver. Coherent demodulation requires the received signal to be multiplied with a carrier having the same frequency and phase as at the transmitter. The phase synchronization is normally achieved using a Phase Locked Loop (PLL) at the receiver. PLL implementation is not done here; rather, we assume perfect phase synchronization. A block diagram of the BPSK demodulator is shown in the figure below. After the multiplication with the carrier (orthonormal basis function), the signal is integrated over the symbol duration 'T' and sampled. Then thresholding is applied to determine if a '1' was sent (+ve voltage) or a '0' was sent (-ve voltage). The Matlab simulation code is given below. Here, for the sake of simplicity, the bit rate is fixed to 1 bit/s (i.e., T=1 second). It is also assumed that the Phase Locked Loop (PLL) has already achieved exact phase synchronization.
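In symbols, the correlator-plus-threshold receiver just described computes, for the k-th symbol interval (this notation is introduced here; $r(t)$ denotes the received waveform and $y_k$ corresponds to the variable y in the code below):

$y_k=\int_{(k-1)T}^{kT} r(t)\,\psi_1(t)\,dt, \qquad \hat{b}_k=\begin{cases}1, & y_k>0\\ 0, & y_k\le 0\end{cases}$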
clear all; close all;
T=1; %Bit period; the bit rate is assumed to be 1 bit/s
%bits to be transmitted
b=[1 0 1 0 1]
NRZ_out=[];
%Vp is the peak voltage +v of the NRZ waveform
Vp=1;
%Here we encode the input bitstream as a bipolar NRZ-L waveform,
%using 200 samples per bit
for index=1:size(b,2)
    if b(index)==1
        NRZ_out=[NRZ_out ones(1,200)*Vp];
    elseif b(index)==0
        NRZ_out=[NRZ_out ones(1,200)*(-Vp)];
    end
end
%Generated bit stream impulses
figure(1);
stem(b);
xlabel('Time (seconds)-->')
ylabel('Amplitude (volts)-->')
title('Impulses of bits to be transmitted');
figure(2);
plot(NRZ_out);
xlabel('Time (seconds)-->');
ylabel('Amplitude (volts)-->');
title('Generated NRZ signal');
t=0.005:0.005:5;
%Frequency of the carrier
f=5;
%Here we generate the modulated signal by multiplying it with
%the carrier (basis function)
Modulated=NRZ_out.*(sqrt(2/T)*cos(2*pi*f*t));
figure;
plot(Modulated);
xlabel('Time (seconds)-->');
ylabel('Amplitude (volts)-->');
title('BPSK Modulated signal');
y=[];
%We begin demodulation by multiplying the received signal again with
%the carrier (basis function)
demodulated=Modulated.*(sqrt(2/T)*cos(2*pi*f*t));
%Here we perform the integration over time period T using trapz
%Integrator is an important part of the correlator receiver used here
for i=1:200:size(demodulated,2)
    y=[y trapz(t(i:i+199),demodulated(i:i+199))];
end
figure;
stem(y); %Plot the correlator outputs
title('Impulses of Received bits');
xlabel('Time (seconds)-->');
ylabel('Amplitude (volts)-->');
%Threshold detection: a positive correlator output is decided as '1',
%a negative one as '0'
received_bits=(y>0)

If you have any comments or questions, you can discuss them below.

### 16 responses to “BPSK Modulation And Demodulation- Complete Matlab Code With Explanation”

1. Jo says: scatter(); %error!! -> Not enough input arguments……

2. Jo says: Thank you very much, that is very helpful! But there is one problem…. scatter(); % error !! -> Not enough input arguments……..

4. Imran says: Hi Dr. Moazzam, first of all, thanks for this. I was looking for an easy-to-understand BPSK Matlab implementation, so I was glad I found this. I just have a question on the NRZ_out and Modulated plots. Shouldn't the xlabel be samples instead of time? Because I don't think the NRZ stream could have changed the frequency of the carrier, which is what the plot seems to suggest. Hope to hear from you, and thanks again.

5. smokingRooster says: Hi, quick question. You specify a frequency of 5 Hz… but your graph shows one of about 50. Could you explain why that is? Thank you.

• sepiatone says: That's just an artifact of the plotting – plot(t, Modulated) and you would see a frequency of 5 Hz.

6. Mustafa says: Problem 1. I want to write code in Matlab which creates a constant-envelope PSK signal waveform for M=8 (M is the modulation order), so that the amplitude of the signal can reach up to sqrt(2). I want to plot a graph showing that there is no difference except in their phases. Problem 2. I want to write code in Matlab which will generate 500 random numbers to represent our symbols, and then divide them into 4 intervals, whereby each interval corresponds to a symbol A0, A1, A2, A3; then plot a stem of 50 random symbols generated in accordance with the interval division.

7. I get an error in the scatter plot while doing BPSK modulation.
Could anyone help me to correct the scatter plot?

8. hh says: I don't understand this part:

for i=1:200:size(demodulated,2)
    y=[y trapz(t(i:i+199),demodulated(i:i+199))];
end

The symbol duration is 200 samples; here we integrate the demodulated signal over each successive window of 200 samples, i.e. one symbol at a time. Hope that helps!

9. Arun says: How can I make the BPSK program run repeatedly? And I am also in need of a 2:1 multiplexer. Please help me with the code.

10. sam kumar says: This code is not giving the proper phase shift.

11. Devika says:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8501376509666443, "perplexity": 2280.85803873921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00512.warc.gz"}
https://www.physicsforums.com/threads/measures-and-alternating-multilinear-forms.280481/
# Measures and alternating multilinear forms

1. Dec 18, 2008

### mma

On an n-dimensional vector space an alternating n-form defines a measure. However, a measure can be defined in its own right, without mentioning any alternating form. My question is: what condition must a measure satisfy for it to originate from an alternating multilinear form? I mean an analogue of the fact that a norm originates from a symmetric bilinear form if and only if it satisfies the parallelogram identity.

2. Dec 20, 2008

### mma

Let me put the question the other way around. We learn in school that the area of a polygon is determined by the lengths of its edges and its shape, i.e. the angles between the edges. Lengths and angles are determined by the scalar product. So we learn the notion of area based on the scalar product. However, area (or at least the ratio of areas) is determined solely by the linear structure of the vector space. My question is now: how could we introduce, in a natural way, the notion of area (or volume, etc.) without using the notion of norm or angles or scalar product? Of course, the emphasis is on the word natural, because otherwise we could simply say that area is an alternating bilinear form.

3. Dec 20, 2008

### Hurkyl Staff Emeritus

I think the Haar measure is what you're looking for.

4. Dec 20, 2008

### mma

Do you mean that we regard our vector space as the group of translations, and we apply the existence and uniqueness of the Haar measure on this group? This is interesting. The requirement of translation invariance alone determines the area or volume function up to multiplication by a positive constant?

5. Dec 20, 2008

### Hurkyl Staff Emeritus

That's what it looks like. Oh, and don't forget the regularity conditions! Although (except for the finiteness condition) I don't know what can go badly if you omit them.

6. Dec 20, 2008

### mma

Yes, of course. These are the "natural" features of a measure, independently of any group or vector space structure. And if we add translation invariance, then the ratio of the areas of two parallelograms is determined unambiguously. Super! Could you show a relatively simple proof for this?

7. Dec 20, 2008

### Hurkyl Staff Emeritus

It depends on what you mean by simple! I haven't pushed it through to completion, but my best lead on a 'simple' proof would be to pick a basis, which thus lets you define a notion of an n-parallelepiped. If you have two translation-invariant, regular measures, then I believe you can show that for any n-parallelepiped whose faces are parallel to the fundamental one, the ratio of the measures is a fixed constant. I presume you can push that through to show their ratio on any measurable set is a fixed constant.

8. Dec 21, 2008

### gel

The only regularity assumption you need is that the measure is finite and nonzero on a given bounded and nonempty open set. Or, just sigma-finite is enough. For the proof, suppose $\mu$ and $\nu$ are two such measures.

\begin{align*} \mu(A)\nu(B) &= \int\int 1_{\{x\in A,y\in B\}}d\mu(x)d\nu(y)\\ &= \int\int 1_{\{x-y\in A,y\in B\}}d\mu(x)d\nu(y)\ \ (x \rightarrow x - y)\\ &= \int\int 1_{\{-y\in A,y+x\in B\}}d\nu(y)d\mu(x)\ \ (y\rightarrow y + x)\\ &= \int\int 1_{\{-y\in A,x\in B\}}d\mu(x)d\nu(y)\ \ (x\rightarrow x-y)\\ &= \nu(-A)\mu(B) \end{align*}

so,

$$\mu(B)/\nu(B) = \mu(A)/\nu(-A)$$

is independent of the set B chosen, and $\mu$ is proportional to $\nu$.

Last edited: Dec 21, 2008

9. Dec 21, 2008

### Hurkyl Staff Emeritus

Ha!
That's (roughly) the swap-in-place trick from computer lore. It's really neat to see it useful elsewhere!

10. Dec 21, 2008

### mma

Very tricky! But we arrived at

$$\mu(B)/\nu(B) = \mu(A)/\nu(-A)$$

and not at

$$\mu(B)/\nu(B) = \mu(A)/\nu(A)$$

So we need to additionally suppose the mirror invariance $$\nu(A)=\nu(-A)$$ too. Or can this also be proved from our original conditions?

11. Dec 21, 2008

### gel

Fix any set A with $\nu(-A)\not=0$ and set $\lambda=\mu(A)/\nu(-A)$. Then my proof showed that $\mu(B)=\lambda\nu(B)$ for every set B, so $\mu,\nu$ are proportional. So, no, you don't need to assume 'mirror invariance', but you can prove it with a couple of extra lines (try putting $\mu=\nu,A=B$ into my equation).

Last edited: Dec 21, 2008

12. Dec 21, 2008

### mma

Yes, it was evident, sorry for the stupid question. Thank you Gel and Hurkyl!
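Written out, gel's hint goes as follows (worked here for the record; not part of the thread): putting $\mu=\nu$ and $A=B$ into $\mu(B)/\nu(B) = \mu(A)/\nu(-A)$ gives

$$1 = \frac{\nu(A)}{\nu(A)} = \frac{\nu(A)}{\nu(-A)} \quad\Longrightarrow\quad \nu(-A)=\nu(A),$$

so mirror invariance is a consequence of translation invariance, not an extra assumption.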
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9898084402084351, "perplexity": 990.4314906547278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514005.65/warc/CC-MAIN-20181021115035-20181021140535-00090.warc.gz"}
http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=96-15
96-15 Exner P., Vugalter S.A. Bound states in a locally deformed waveguide: the critical case (24K, LaTeX) Jan 22, 96

Abstract. We consider the Dirichlet Laplacian for a strip in $\,\R^2$ with one straight boundary and a width $\,a(1+\lambda f(x))\,$, where $\,f\,$ is a smooth function of a compact support with a length $\,2b\,$. We show that in the critical case, $\,\int_{-b}^b f(x)\,dx=0\,$, the operator has no bound states for small $\,|\lambda|\,$ if $\,b<(\sqrt{3}/4)a\,$. On the other hand, a weakly bound state exists provided $\,\|f'\|< 1.56 a^{-1}\|f\|\,$; in that case there are positive $\,c_1, c_2\,$ such that the corresponding eigenvalue satisfies $\,-c_1\lambda^4\le \epsilon(\lambda)- (\pi/a)^2 \le -c_2\lambda^4\,$ for all $\,|\lambda|\,$ sufficiently small.

Files: 96-15.tex
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9963135719299316, "perplexity": 645.9369319938852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807650.44/warc/CC-MAIN-20171124104142-20171124124142-00443.warc.gz"}
https://www.physicsforums.com/threads/what-is-integration-by-parts.762895/
# What is integration by parts

1. Jul 23, 2014

### Greg Bernhardt

Definition/Summary

In this article, we shall learn a method for integrating the product of two functions. This method is derived from the 'product rule' for differentiation, but can only be applied to integrate products of certain types.

Equations

$$\int u \, dv=uv - \int v \, du$$

where u and v are functions of one variable; x, say.

Extended explanation

As you can see, the equation contains two variables, namely 'u' and 'v'. These variables actually represent two functions, and thus the above rule can also be stated as:

$$\int f(x) \ g(x) \ dx=~ f(x)\int g(x) \ dx \ -~\int \left[ \ f'(x) \int g(x) \ dx \ \right] \ dx$$

The most important step in solving such problems is the choice of u and v from the given product. This can be done by using the following order:

L- Logarithmic
I- Inverse trigonometric
A- Algebraic
T- Trigonometric
E- Exponential

(Or can be remembered as 'LIATE')

Thus, of the two given functions, whichever comes first in the above list is taken as 'u', and the other as 'v'.

Implementation of the rule given above can be clearly understood by solving an example. Consider

$$I \equiv \int x \cos(x) \, dx$$

In the above example, 'x' is an algebraic expression whereas 'cos(x)' is a trigonometric expression. Thus, 'x' is taken as 'u' and 'cos(x)' as 'v'. Hence, applying the rule given above, that is,

$$\int uv \, dx=~ u\int v \, dx-~\int\left[\frac{du}{dx} \int v \, dx\right]dx$$

so that

$$I= x\int \cos(x) \, dx - \int\left[\frac{d}{dx}x\int \cos(x) \, dx\right] dx$$

$$= x \sin(x) - \int \sin(x) \cdot 1 \, dx$$

because

$$\int \cos(x) \, dx= \sin(x)~\text{and}~ \frac{d}{dx}x=~1$$

And finally,

$$I = x \sin(x) + \cos(x) + c$$

because

$$\int \sin(x) \, dx= -\cos(x)$$

* This entry is from our old Library feature. If you know who wrote it, please let us know so we can attribute a writer. Thanks!
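As a further illustration of the LIATE ordering (an extra example added here, not in the original entry): in $$\int x \ln(x) \, dx$$ the logarithmic factor comes before the algebraic one, so we take u = ln(x) and dv = x dx, giving

$$\int x \ln(x) \, dx = \frac{x^2}{2}\ln(x) - \int \frac{x^2}{2}\cdot\frac{1}{x} \, dx = \frac{x^2}{2}\ln(x) - \frac{x^2}{4} + c,$$

which can be checked by differentiating the right-hand side.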
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004600644111633, "perplexity": 993.6962801814853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813431.5/warc/CC-MAIN-20180221044156-20180221064156-00429.warc.gz"}
https://www.mathway.com/examples/linear-algebra/number-sets/finding-the-set-complement-of-two-sets?id=591
# Linear Algebra Examples

Find the Set Complement A \ B

Set up the intersection notation of sets A and B. The intersection of two sets is the set of elements which are in both sets. The complement A \ B consists of the remaining members of A that aren't members of the intersection of the two sets.
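A concrete instance (the sets here are chosen purely for illustration): with A = {1, 2, 3} and B = {2, 3, 4}, the intersection is A ∩ B = {2, 3}, so the complement is A \ B = {1}, i.e. the members of A left over after removing the intersection.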
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9834295511245728, "perplexity": 272.87303602163405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865595.47/warc/CC-MAIN-20180523102355-20180523122355-00479.warc.gz"}
http://mathhelpforum.com/advanced-algebra/147751-one-linear-algebra-problem-dual-spaces-print.html
# One Linear Algebra Problem (Dual Spaces)

• Jun 4th 2010, 09:07 AM
eyke
One Linear Algebra Problem (Dual Spaces)

• Jun 4th 2010, 05:09 PM
tonio

Quote:

Originally Posted by eyke

What's the problem? You need three polynomials $p_i(x)\,,\,i=1,2,3$ s.t. $f_i(p_j(x))=\delta_{ij}$. For example, you need that $\int_0^1p_1(x)\,dx=1\,,\,\,\int^2_0p_1(x)\,dx=0\,,\,\int^{-1}_0p_1(x)\,dx=0$. Putting $p_1(x)=ax^2+bx+c$, this means:

$1=\int^1_0(ax^2+bx+c)dx=\frac{a}{3}+\frac{b}{2}+c$

$0=\int^2_0(ax^2+bx+c)dx=\frac{8a}{3}+2b+2c$

$0=\int^{-1}_0(ax^2+bx+c)dx=-\frac{a}{3}+\frac{b}{2}-c$

Solving the resulting linear system we get $p_1(x)=-\frac{3}{2}x^2+x+1$. Do the same with the other two polynomials.

Tonio
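A quick verification of $p_1$ (worked out here as a check; not part of the original post):

$$\int_0^1\Big(-\tfrac{3}{2}x^2+x+1\Big)dx=-\tfrac12+\tfrac12+1=1,\qquad \int_0^2\Big(-\tfrac{3}{2}x^2+x+1\Big)dx=-4+2+2=0,$$

$$\int_0^{-1}\Big(-\tfrac{3}{2}x^2+x+1\Big)dx=\tfrac12+\tfrac12-1=0,$$

so $f_1(p_1)=1$ and $f_2(p_1)=f_3(p_1)=0$ as required.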
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9707679748535156, "perplexity": 2232.4447747760064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887660.30/warc/CC-MAIN-20180118230513-20180119010513-00382.warc.gz"}
https://infoscience.epfl.ch/record/227177
## Type Soundness Proofs with Definitional Interpreters

While type soundness proofs are taught in every graduate PL class, the gap between realistic languages and what is accessible to formal proofs is large. In the case of Scala, it has been shown that its formal model, the Dependent Object Types (DOT) calculus, cannot simultaneously support key metatheoretic properties such as environment narrowing and subtyping transitivity, which are usually required for a type soundness proof. Moreover, Scala and many other realistic languages lack a general substitution property. The first contribution of this paper is to demonstrate how type soundness proofs for advanced, polymorphic type systems can be carried out with an operational semantics based on high-level, definitional interpreters, implemented in Coq. We present the first mechanized soundness proofs in this style for System F<: and several extensions, including mutable references. Our proofs use only straightforward induction, which is significant, as the combination of big-step semantics, mutable references, and polymorphism is commonly believed to require coinductive proof techniques. The second main contribution of this paper is to show how DOT-like calculi emerge from straightforward generalizations of the operational aspects of F<:, exposing a rich design space of calculi with path-dependent types in between System F and DOT, which we dub the System D Square. By working directly on the target language, definitional interpreters can focus the design space and expose the invariants that actually matter at runtime. Looking at such runtime invariants is an exciting new avenue for type system design.

Published in: Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages
Presented at: POPL 2017, Paris, France, January 15 - 21, 2017
Year: 2017
Publisher: New York, Assoc Computing Machinery
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8041759729385376, "perplexity": 2129.168563412506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998462.80/warc/CC-MAIN-20190617083027-20190617105027-00371.warc.gz"}