URL: stringlengths 15 to 1.68k
text_list: listlengths 1 to 199
image_list: listlengths 1 to 199
metadata: stringlengths 1.19k to 3.08k
https://ipv4.calculatoratoz.com/en/sanjay-krishna/847335e4-0c39-40b1-881f-94a636b99244/profile
[ "# Calculators Created by Sanjay Krishna", null, "Amrita School of Engineering (ASE), Vallikavu\n299\nFormulas Created\n200\nFormulas Verified\n109\nAcross Categories\n\n## List of Calculators by Sanjay Krishna\n\nFollowing is a combined list of all the calculators that have been created and verified by Sanjay Krishna. Sanjay Krishna has created 299 and verified 200 calculators across 109 different categories till date.\nVerified Absolute temperature for the velocity of sound wave in the isothermal process\nVerified Absolute temperature for velocity of sound wave in terms of adiabatic process\nVerified Velocity along roll axis for small sideslip angle\nVerified Velocity along yaw axis for small angle of attack\nVerified Yawing moment\nVerified Yawing moment coefficient\n13 More Aerodynamic Nomenclature Calculators\nVerified Velocity at Altitude\n6 More Altitude Effects on Power Required and Available Calculators\nCreated Adiabatic wall enthalpy over flat plate using Stanton number\nCreated Coefficient of drag over flat under freestream flow conditions\nCreated Drag force over flat plate\nCreated Free stream enthalpy over flat plate with freestream conditions\nCreated Free stream velocity over flat plate using drag force\nCreated Free stream velocity over flat plate with freestream conditions\nCreated Freestream density over flat plate using Stanton number\nCreated Freestream density over flat under freestream flow conditions\nCreated Freestream Stanton number for flat plate\nCreated Freestream velocity over flat plate using Stanton number\nCreated Local heat transfer over flat plate using Stanton number\nCreated Pressure ratio for insulated flat plate, the strong interaction\nCreated Pressure ratio for insulated flat plate, the weak interaction\nCreated Total enthalpy over flat plate with freestream conditions\nCreated Wall enthalpy over flat plate using Stanton number\nVerified Ratio of Stagnation and Static Density\nVerified Ratio of Stagnation and Static Temperature\n4 More Basic Compressible Flow Calculators\nVerified Coefficient of contraction for sudden contraction\nVerified Difference in liquid level in three compound pipes with same friction coefficient\nVerified Maximum Area of Obstruction in Pipe\nVerified Power Lost ue to Sudden Enlargement\nVerified Power transmission through pipes\nVerified Time taken by pressure wave to travel\n6 More Basic Formulas Calculators\nBasics (2)\nVerified Mach Angle\nVerified Mayer's formula\n6 More Basics Calculators\nBasics (2)\nVerified Thermal Conductance for given Thermal Resistance\nVerified Total Thermal Resistance for Conduction through Two Resistances in Parallel\n3 More Basics Calculators\nCreated Pressure coefficient combined with blast wave for shuttle\nCreated Pressure coefficient combined with blast wave for shuttle at angle of attack\nCreated Pressure coefficient for blast wave theory\nCreated Pressure coefficient for blast wave theory at very high values of mach\nCreated Pressure coefficient for Blunt-nosed cylinder:\nCreated Pressure coefficient for Blunt-nosed plate:\nVerified Frictional force on body A\n5 More Bodies connected by string and lying on rough inclined planes Calculators\nVerified Acceleration of system with bodies connected by string and lying on smooth inclined planes\nVerified Tension in string if both bodies are lying on smooth inclined planes\n2 More Bodies connected by string and lying on smooth inclined planes Calculators\nVerified Acceleration of system with bodies one hanging free, other lying on rough 
inclined plane\nVerified Coefficient of friction for given tension\nVerified Inclination of plane for given frictional force\nVerified Mass of body B for given frictional force\nVerified Tension in string given coefficient of friction of inclined plane\n2 More Bodies connected by string, one hanging free, other lying on a rough inclined plane Calculators\nVerified Tension in string given coefficient of friction of horizontal plane\n1 More Bodies connected by string, one hanging free, other lying on rough horizontal plane Calculators\nVerified Acceleration of system with bodies one hanging free and other lying on smooth inclined plane\nVerified Angle of inclination for given acceleration\nVerified Angle of inclination for given tension\n1 More Bodies connected by string, one hanging free, other lying on smooth inclined plane Calculators\nCreated Atomic Radius in BCC\nCreated Lattice Constant of BCC\nCreated Total volume of atoms in BCC\nVerified Area of surface for average drag coefficient in boundary layer flows\nVerified Distance from leading edge for Blasius's solution in boundary layer flow\nVerified Distance from leading edge in boundary layer flow\nVerified Length of the plate for Reynold number in the laminar boundary layer flow\nVerified Reynold number at the end of the plate\nVerified Reynold number for drag coefficient in Blasius's solution of boundary layer flow\nVerified Shear stress at the boundary for turbulent boundary layer over a flat plate\nVerified Thickness of boundary layer for Blasius's solution in boundary layer flow\nVerified Thickness of boundary layer in boundary layer flow\nVerified Velocity of fluid for Reynold number in the laminar boundary layer flow\n11 More Boundary Layer Flow Calculators\nCreated Adiabatic wall enthalpy using Stanton number\nCreated Dynamic viscosity around wall\nCreated Enthalpy of wall using Stanton number\nCreated Local heat transfer rate using Nusselt's number\nCreated Local heat-transfer rate calculation using Stanton number\nCreated Local shear stress at wall\nCreated Local skin-friction coefficient\nCreated Nusselt number for hypersonic vehicle\nCreated Nusselt's number with Reynolds number, Stanton number and Prandtl number\nCreated Prandtl number with Reynolds number, Nusselt's number, and Stanton number\nCreated Reynolds number for given Nusselt's number, Stanton number and Prandtl number\nCreated Skin friction coefficient for incompressible flow\nCreated Stanton number for hypersonic vehicle\nCreated Stanton number with Reynolds number, Nusselt's number, Stanton number and Prandtl number\nCreated Static Density equation using skin friction coefficient\nCreated Static density equation using Stanton number\nCreated Static velocity equation using skin friction coefficient\nCreated Static velocity using Stanton number\nCreated Static viscosity relation using temperature of wall\nCreated Thermal conductivity at edge of boundary layer equation using Nusselt's number\nVerified Angle of heel for metacentric height in experimental method\nVerified Meta-centric height in experimental method\nVerified Movable weight for metacentric height in experimental method\n8 More Buoyancy Calculators\nVerified Circulation for a single stagnation point\n3 More Circulation Calculators\nVerified Coefficient of Discharge given Time of Emptying Hemispherical Tank\n4 More Coefficient of Discharge Calculators\nVerified Coefficient of velocity\n2 More Coefficient of Velocity Calculators\nVerified Coefficient of drag for sphere in Oseen formula when Reynolds 
number is between 0.2 and 5\nVerified Coefficient of drag for sphere in stoke's law when Reynolds number is less than 0.2\n3 More Coefficient's Calculators\nVerified Pressure ratio for maximum flow rate through nozzle or orifice\nVerified Temperature of fluid for stagnation temperature considering compressible fluid flow\n5 More Compressible Flow Calculators\nCreated Emissivity\nCreated Freestream density\nCreated Freestream velocity\nCreated Reference viscosity\nVerified Sidewash angle\n14 More Contribution of Aircraft Components- Part 1 Calculators\nVerified Dynamic pressure at vertical tail for given vertical tail efficiency\nVerified Dynamic pressure at wing for given vertical tail efficiency\nVerified Vertical tail efficiency\n12 More Contribution of Aircraft Components- Part 3 Calculators\nCreated Atomic Packing Factor\nCreated Cutting Force given Thrust Force and Normal Rake Angle\n5 More Cutting Force Calculators\nVerified Total thermal resistance of 2 cylindrical resistances connected in series.\nVerified Total thermal resistance of 3 cylindrical resistances connected in series\nVerified Total Thermal Resistance of Cylindrical wall with Convection on both Sides\n11 More Cylinders Calculators\nVerified Diameter of pipe for difference in pressure in viscous flow\nVerified Diameter of pipe for head loss due to friction in viscous flow\nVerified Diameter of pipe for loss of pressure head in viscous flow\nVerified Discharge in Borda's Mouthpiece Running Free\nVerified Discharge in Borda's Mouthpiece Running Full\nVerified Discharge in Convergent-Divergent Mouthpiece\n3 More Discharge Calculators\nVerified Coefficient of Discharge for Time required to Empty Reservoir\n12 More Discharge Calculators\nVerified Coefficient of Drag for given thrust and weight\nVerified Lift-induced drag coefficient for given required thrust\n5 More Drag during Level Unaccelerated Flight Calculators\nVerified Drag force for a body moving in a fluid of certain density\nVerified Total drag force on a sphere\n5 More Drag Force on Body Calculators\nVerified Actual discharge in venturimeter\nVerified Difference in pressure head for heavier liquid in manometer\nVerified Difference in pressure head for light liquid in manometer\nVerified Velocity at any point for coefficient of pitot-tube\n3 More Dynamics of Fluid Flow Calculators\nCreated Emissivity per unit mole\nCreated Kinetic energy per mole\nCreated Kinetic energy per mole using molar volume\nCreated Kinetic energy per mole using temperature of gas\nCreated Mean free path of single-species gas\nCreated Mean free path using number density\nCreated Molar volume using kinetic energy per mole\nCreated Number density\nCreated Pressure of gas using number density\nCreated Pressure using kinetic energy per mole\nCreated Pressure using molar volume\nCreated Specific gas constant using kinetic energy per mole\nCreated Temperature of gas using Emissivity per unit mole\nCreated Temperature of gas using kinetic energy per mole\nCreated Volume of gas\nVerified Elevator deflection angle for given gearing ratio\n3 More Elevator and Stick Deflection Angles Calculators\nVerified Enthalpy ahead of normal shock from normal shock energy equation\nVerified Enthalpy behind of normal shock from normal shock energy equation\nVerified Velocity ahead of normal shock by normal shock momentum equation\nVerified Velocity behind of normal shock by normal shock momentum equation\nVerified Velocity upstream of shock using Prandtl relation\n6 More Enthalpy and Velocity Calculators\nCreated 
Atomic Radius in FCC\nCreated Lattice Constant of FCC\nCreated Volume of atoms in FCC\nCreated Adiabatic wall enthalpy for flat plate\nCreated Adiabatic wall enthalpy using recovery factor\nCreated Aerodynamic Heating Equation for Stanton number\nCreated Coefficient of friction using Stanton number for flat plat case\nCreated Drag per unit span\nCreated Prandtl number for flat plate with viscous flow\nCreated Recovery factor calculation using Prandtl number\nCreated Recovery factor for flat plate with viscous flow\nCreated Recovery Factor using temperature\nCreated Skin-friction drag Coefficient\nCreated Skin-friction drag for flat plate in viscous flow\nCreated Stanton number with coefficient of friction\nCreated Static density equation using aerodynamic equation\nCreated Static velocity equation using aerodynamic heating equation\nCreated Total enthalpy in inviscid flow outside boundary layer\nVerified Lift Coefficient for given wing loading and turn radius\n14 More Flight Envelope Calculators\nVerified Bazin's constant\nVerified Chezy's constant considering Bazin formula\nVerified Chezy's constant considering velocity\nVerified Critical depth considering flow in open channels\nVerified Critical depth considering minimum specific energy\nVerified Critical depth considering the critical velocity\nVerified Critical velocity considering flow in open channels\nVerified Discharge per unit width considering flow in open channels\nVerified Hydraulic mean depth considering Bazin formula\nVerified Hydraulic mean depth using Chezy's formula\nVerified Minimum specific energy considering the critical depth\nVerified Velocity of Chezy's formula\n7 More Flow in Open Channels Calculators\nVerified Airspeed measurement by Pitot tube for low-speed incompressible flow\nVerified Lift per unit span by Kutta-Joukowski theorem\n13 More Fundamentals of Inviscid Incompressible flow Calculators\nVerified Control stick length for given gearing ratio\n3 More Gearing Ratio Calculators\nVerified Lift force for given glide angle\nVerified Lift-to-drag ratio for given glide angle\n3 More Gliding Flight Calculators\nVerified Head available at the base of the nozzle\nVerified Total head at the inlet of pipe for head available at the base of the nozzle\nVerified Total head available at inlet of pipe for efficiency of power transmission\nVerified Hinge moment coefficient for given stick force\n2 More Hinge Moment Calculators\nCreated Blunt-nosed flat plate pressure ratio (first approximation)\nCreated Boltzmann constant for cylindrical blast wave\nCreated Coefficient of drag equation using energy released from blast wave\nCreated Creation pressure for planar blast wave\nCreated Energy for blast wave\nCreated Modified Energy for cylindrical blast wave\nCreated Modified pressure equation for cylindrical blast wave\nCreated Modified Radial coordinate equation for cylindrical blast wave\nCreated Pressure for cylindrical blast wave\nCreated Pressure ratio for blunt cylinder blast wave\nCreated Pressure ratio for blunt slab blast wave\nCreated Radial coordinate for planar blast wave\nCreated Radial coordinate of blunt slab blast wave\nCreated Radial coordinate of cylindrical blast wave\nCreated Simplified pressure ratio for blunt cylinder blast wave\nCreated Time required for blast wave\nCreated Blunt-nosed radial coordinate flat plate (first approximation):\nCreated Forces acting on body along flight path\nCreated Forces acting Perpendicular to body on flight path\nCreated Pressure ratio of Blunt-nosed cylinder (first 
approximation):\nCreated Radial coordinate of Blunt-nosed cylinder (first approximation):\nCreated Radius for cylinder-wedge body shape\nCreated Radius for sphere-cone body shape\nCreated Radius of curvature for cylinder wedge body shape\nCreated Radius of curvature for sphere cone body shape\nCreated Axial force coefficient\nCreated Coefficient of drag\nCreated Coefficient of pressure with similarity parameters\nCreated Deflection angle\nCreated Drag force\nCreated Dynamic pressure\nCreated Dynamic Pressure given Coefficient of Lift\nCreated Fourier's Law of Heat Conduction\nCreated Hypersonic similarity parameter\nCreated Lift coefficient\nCreated Lift Force\nCreated Mach number with fluids\nCreated Mach ratio at high mach number\nCreated Moment coefficient\nCreated Newtonian sine squared law for pressure coefficient\nCreated Normal Force Coefficient\nCreated Pressure ratio for high Mach number\nCreated Pressure ratio having high mach number with similarity constant\nCreated Shear-stress distribution.\nCreated Supersonic expression for pressure coefficient on surface with local deflection angle\nCreated Non dimensional radius for hypersonic vehicles\nCreated Non-dimensional density\nCreated Non-dimensional density for high mach number\nCreated Non-dimensional parallel velocity component for high mach number\nCreated Non-dimensional perpendicular velocity component for high mach number\nCreated Non-dimensional pressure\nCreated Non-dimensional pressure for high mach number\nCreated Slenderness ratio with cone radius for hypersonic vehicle\nCreated Transformed conical variable\nCreated Transformed conical variable with cone angle in hypersonic flow\nCreated Transformed conical variable with wave angle\nCreated Density before shock formation for compression wave\nCreated Density before Shock formation for Expansion Wave\nCreated Detachment distance of cylinder wedge body shape\nCreated Detachment distance of sphere cone body shape\nCreated Grid point calculation for shock waves\nCreated Local shock velocity equation\nCreated Mach wave behind shock\nCreated Mach wave behind shock with mach infinity\nCreated New pressure after shock formation for compression wave\nCreated New pressure after shock formation, subtracted to velocity for expansion wave\nCreated Nose radius of cylinder-wedge\nCreated Nose radius of sphere cone\nCreated Pressure ratio for unsteady waves\nCreated Pressure ratio for unsteady waves with subtracted induced mass motion for expansion waves\nCreated Ratio of new and old temperature\nCreated Ratio of new and old temperature for expansion waves\nCreated Temperature ratio for unsteady compression waves\nCreated Temperature ratio for unsteady expansion wave\nCreated Change in velocity for hypersonic flow in x direction\nCreated Coefficient of pressure with slenderness ratio\nCreated Coefficient of pressure with slenderness ratio and similarity constant\nCreated Constant g used for finding the location of perturbed shock\nCreated Density ratio with similarity constant having slenderness ratio\nCreated Distance from tip of leading edge to base\nCreated Doty and Rasmussen - normal-force coefficient\nCreated Inverse of density for hypersonic flow\nCreated Inverse of density for hypersonic flow using Mach number\nCreated Non dimensional Change in hypersonic disturbance velocity in x direction\nCreated Non dimensional Change in hypersonic disturbance velocity in y direction\nCreated Non dimensional pressure equation with slenderness ratio\nCreated Non dimensional Velocity 
disturbance in y direction in hypersonic flow\nCreated Non- dimensionalised time\nCreated Rasmussen closed form expression for shock wave angle\nCreated Similarity constant equation using wave angle\nCreated Similarity constant equation with slenderness ratio\nCreated Boundary-layer momentum thickness using Reynolds number at transition point\nCreated Eddy viscosity calculation\nCreated Local Mach Number using Reynolds number equation at transition region\nCreated Location of transition point\nCreated Prandtl number of transition flow\nCreated Reynolds number equation using boundary-layer momentum thickness\nCreated Reynolds number equation using local Mach number\nCreated Specific heat at constant pressure for transient flow\nCreated Static density at transition point\nCreated Static density equation using boundary-layer momentum thickness\nCreated Static velocity at transition point\nCreated Static velocity using boundary-layer momentum thickness\nCreated Static viscosity at transition point\nCreated Static viscosity equation using boundary-layer momentum thickness\nCreated Thermal conductivity of transition flow\nCreated Transition Reynolds number\nCreated Large Mach number over flat plate using static temperature and wall temperature\nCreated Mach number over flat plate using static temperature and wall temperature\nCreated Pressure ratio for cold-wall case the strong interaction\nCreated Pressure ratio for cold-wall case the weak interaction\nCreated Static density of fluid over flat plate\nCreated Static temperature of fluid over flat plate\nCreated Static temperature of plate using wall viscosity\nCreated Static temperature over flat plate under viscous, very high Mach flow\nCreated Static temperature over flat plate using Static Mach Number\nCreated Static viscosity using wall temperature and static temperature\nCreated Total temperature over flat plate under viscous Mach flow\nCreated Total temperature over flat plate under viscous very high Mach flow\nCreated viscosity of wall using wall temperature and static temperature\nCreated Wall density of fluid over flat plate\nCreated Wall temperature of fluid over the flat plate\nCreated Wall temperature of plate using wall viscosity\nCreated Wall temperature over flat plate under viscous, very high Mach flow\nCreated Wall temperature over flat plate using Static Mach Number\nVerified Radius of Rankine circle\n13 More Ideal flow or Potential flow Calculators\nVerified Boundary layer thickness for Turbulent flow\nVerified Center of pressure location for cambered airfoil\nVerified Skin friction drag coefficient for flat plate in laminar flow\nVerified Skin friction drag coefficient for flat plate in Turbulent flow\n4 More Incompressible flow over airfoil Calculators\nVerified Height or depth of paraboloid for volume of air\nVerified Total Pressure Force at Bottom of Cylinder\nVerified Total pressure force on top of cylinder\n4 More Kinematics of flow Calculators\nCreated Newtonian Dynamic pressure\nCreated Newtonian pressure distribution over the surface using cosine angle\nCreated Nusselt number at stagnation point\nCreated Nusselt number for stagnation point on blunt body\nCreated Reynolds analogy factor in Finite deference method\nCreated Reynolds analogy for Stanton number in finite difference method\nCreated Stagnation pressure\nVerified Landing ground roll distance\n3 More Landing Performance Calculators\nVerified Length of pipe for difference of pressure in viscous flow\nVerified Length of pipe for head loss due to friction in 
viscous flow\nVerified Length of pipe for loss of pressure head in viscous flow\nVerified Flight velocity for given elevator hinge moment coefficient\n4 More Longitudinal Control of Elevator Hinge Moment Calculators\nVerified Loss of head due to friction\nVerified Loss of pressure head for viscous flow through circular pipe\n1 More Loss of Head Calculators\nVerified Head loss due to friction for the efficiency of power transmission\nVerified Loss of head at the exit of pipe\nVerified Loss of head due to obstruction in pipe\n5 More Loss of Head Calculators\nCreated Area of shear plane for given shear angle, width of cut and uncut chip thickness\nCreated Coefficient of friction for given friction angle\nCreated Coefficient of friction for given thrust force, cutting force and normal rake angle\nCreated Coefficient of Friction given Forces Normal and along Tool Rake Face\nCreated Force acting Normal to Rake Face given Cutting Force and Thrust Force\nCreated Side cutting edge angle for given width of cut\nCreated Width of cut for given side cutting edge angle\n15 More Merchant Force Circle (Mechanics of Orthogonal metal cutting) Calculators\nCreated Convective heat transfer to surface\nCreated Radiative heat transfer to surface stemming from thermal radiation\nCreated Coefficient of drag equation with angle of attack\nCreated Coefficient of drag equation with coefficient of normal force\nCreated Coefficient of lift equation with angle of attack\nCreated Coefficient of lift equation with coefficient of normal force\nCreated Drag force with angle of attack\nCreated Exact normal shock wave maximum coefficient of pressure\nCreated Force Exerted on Surface given Static Pressure\nCreated Lift force with angle of attack\nCreated Mass Flux Incident on Surface Area\nCreated Maximum Pressure coefficient\nCreated Modified Newtonian Law\nCreated Pressure coefficient for slender 2D bodies\nCreated Pressure coefficient for slender bodies of revolution\nCreated Time rate of change of momentum of mass flux\nVerified Characteristic Mach number\nVerified Critical speed of sound from Prandtl relation\nVerified Mach Number behind Shock\nVerified Prandtl Condition\n16 More Normal Shock Waves Calculators\nVerified Length of Crest of Weir or Notch\nVerified Time required to Empty Reservoir\n4 More Notches and Weirs Calculators\nVerified Component of Upstream Mach normal to oblique shock\n18 More Oblique Shock and Expansion Waves Calculators\nCreated Coefficient of pressure derived from oblique shock theory\nCreated Density ratio when Mach become infinite\nVerified Dynamic pressure for given specific heat ratio and Mach number\nCreated Exact Density Ratio\nCreated Exact pressure ratio\nCreated Non-dimensional pressure coefficient\nCreated Parallel upstream flow components after shock as Mach tends to infinite\nCreated Perpendicular Upstream Flow Components behind Shock Wave\nCreated Pressure Coefficient behind Oblique Shock Wave\nCreated Pressure Coefficient behind Oblique Shock Wave for Infinite Mach Number\nCreated Pressure ratio when Mach becomes infinite\nCreated Temperature ratio when Mach becomes infinite\nCreated Temperature ratios\nCreated Velocity of sound using dynamic pressure and density\nCreated Wave angle for small deflection angle\nVerified Area at vena contracta for discharge and constant head\nVerified Area of Mouthpiece in Borda's Mouthpiece Running Free\nVerified Area of Mouthpiece in Borda's Mouthpiece Running Full\nVerified Area of Orifice given Time of Emptying Hemispherical Tank\nVerified 
Coefficient of Contraction given Area of Orifice\nVerified Head of Liquid above Centre of Orifice\nVerified Horizontal distance for coefficient of velocity and vertical distance\nVerified Vertical distance for coefficient of velocity and horizontal distance\n8 More Orifices and Mouthpieces Calculators\nVerified Interface temperature of composite wall of 2 layers given outer surface temperature\nVerified Total Thermal Resistance of Plane Wall with Convection on both Sides\n20 More Plane walls Calculators\nVerified Shaft brake power for reciprocating engine-propeller combination\n2 More Power Available and Maximum Velocity Calculators\nVerified Power required for given aerodynamic coefficients\nVerified Thrust of aircraft required for given required power\nVerified Weight of aircraft for given required power\n7 More Power Required for Level Unaccelerated Flight Calculators\nVerified Efficiency of power transmission in flow through pipes\n2 More Power Transmission Efficiency Calculators\nVerified Pressure at upstream point on streamline by Bernoulli's equation\nVerified Pressure Coefficient\nVerified Static pressure in incompressible flow\nVerified Surface pressure coefficient for non-lifting flow over circular cylinder\n9 More Pressure Calculators\nVerified Density of fluid considering velocity at outlet of orifice\n3 More Pressure and Density Calculators\nVerified Radial velocity for non-lifting flow over circular cylinder\n2 More Radial Velocity Calculators\nVerified Endurance for given Lift-to-Drag ratio of Jet Airplane\nVerified Lift-to-Drag ratio for given Endurance of Jet Airplane\nVerified Range of Jet Airplane\nVerified Thrust-Specific Fuel Consumption for given Endurance and Lift-to-Drag ratio of Jet Airplane\nVerified Thrust-Specific Fuel Consumption for given Range of Jet Airplane\n2 More Range and Endurance of Jet Airplane Calculators\nVerified Lift-to-Drag ratio for given Range of Propeller-Driven Airplane\n9 More Range and Endurance of Propeller-Driven Airplane Calculators\nVerified Rate of Climb\nVerified Rate of Climb for given excess power\nVerified Weight of aircraft for given excess power\n7 More Rate of Climb Calculators\nCreated Chord length for flat plate case\nCreated Local Reynolds number\nCreated Local turbulent skin-friction coefficient for incompressible flow\nCreated Mach number at reference temperature\nCreated Overall skin-friction drag coefficient\nCreated Overall skin-friction drag coefficient for incompressible flow\nCreated Reference temperature equation\nCreated Reynolds number for chord length\nCreated Reynolds number for chord length using Overall skin-friction drag coefficient\nCreated Stanton Number obtained from classical theory\nCreated Static density of plate using chord length for flat plate case\nCreated Static velocity of plate using chord length for flat plate case\nCreated Static viscosity of plate using chord length for flat plate case\nCreated Turbulent flat-plate skin-friction coefficient\nCreated Wall temperature using reference temperature\nCreated Atomic Radius in SCC\nCreated Lattice Constant of SCC\nCreated Total volume of atoms in SCC\nCreated Coefficient of pressure equation using specific heat ratio\nCreated curvilinear grid location equation\nCreated Density equation using enthalpy and pressure\nCreated Enthalpy equation using coefficient of pressure for calorically perfect gas\nCreated Enthalpy equation using pressure and density\nCreated Enthalpy equation using specific heat ratio\nCreated Free stream enthalpy\nCreated local 
shock-layer thickness\nCreated pressure equation using Enthalpy and density\nCreated total enthalpy equation using specific heat ratio and velocities\nCreated Total specific enthalpy\nVerified Heat flow rate through spherical wall\nVerified Thermal Resistance of Spherical Wall\n9 More Spheres Calculators\nVerified Radius of the cylinder for a single stagnation point\n3 More Stagnation Points Calculators\nVerified Stagnation temperature considering the compressible fluid flow\n1 More Stagnation Temperature Calculators\nVerified Elevator chord length for given stick force\nVerified Elevator Stick force for given gearing ratio\n5 More Stick Forces Calculators\nVerified Stream function for flow over Rankine oval\nVerified Stream function for uniform incompressible flow in polar coordinates\n7 More Stream Function Calculators\nVerified Angle of incidence of wing\nVerified Mean aerodynamic chord for given tail pitching moment coefficient\n19 More Tail Contribution Calculators\nVerified Horizontal tail volume ratio for given pitching moment coefficient\nVerified Tail Efficiency for given tail volume ratio\nVerified Tail lift coefficient for given tail volume ratio\nVerified Tail Pitching moment coefficient for given tail volume ratio\n10 More Tail Contribution Contd Calculators\nVerified Liftoff velocity for given stall velocity\n13 More Takeoff Performance Calculators\nVerified Tangential velocity for 2-D Vortex flow\nVerified Tangential velocity for lifting flow over circular cylinder\n1 More Tangential Velocity Calculators\nCreated Thrust force for given frictional force along tool rake face, cutting force and normal rake angle\nCreated Thrust Force given Cutting Force and Normal Rake Angle\n2 More Thrust Force Calculators\nVerified Minimum Thrust required for given weight\nVerified Thrust-to-weight ratio\n10 More Thrust Required for Level Unaccelerated Flight Calculators\nVerified Time of Emptying Hemispherical Tank\n2 More Time of Emptying Calculators\nVerified Total torque measured by strain in rotating cylinder method\n4 More Torque Required Calculators\nVerified Shear stress developed for turbulent flow in pipes\nVerified Shear velocity for turbulent flow in pipes\n16 More Turbulent flow Calculators\nVerified Weight of aircraft during level turn\n11 More Turning Flight Calculators\nValves (1)\nVerified Retarding force for gradual closure of valves\n3 More Valves Calculators\nVerified Angular speed of outer cylinder in rotating cylinder method\nVerified Maximum velocity at any Radius using Velocity\nVerified Velocity at any radius, radius of pipe, and maximum velocity\nVerified Velocity of piston or body for movement of piston in dash-pot\n1 More Velocity Calculators\nVerified Freestream velocity for a single stagnation point\nVerified Tangential velocity for single stagnation point\n4 More Velocity Calculators\nVerified Velocity at the outlet for head loss at the exit of pipe\nVerified Velocity of Fluid for Head Loss due to Obstruction in Pipe\nVerified Velocity of liquid at vena-contracta\n6 More Velocity of Flow Calculators\nVerified Velocity of sound wave in terms of an adiabatic process\nVerified Velocity of sound wave in terms of isothermal process\n4 More Velocity of Sound Wave Calculators\nVerified Velocity potential for 2-D doublet flow\nVerified Velocity potential for 2-D source flow\nVerified Velocity potential for uniform incompressible flow in polar coordinates\n2 More Velocity Potential Calculators\nVerified Viscosity of fluid or oil for movement of piston in 
dash-pot\nVerified Viscosity of fluid or oil in rotating cylinder method\n3 More Viscosity of Fluid Calculators\nVerified Radius of pipe from maximum velocity and velocity at any radius\n19 More Viscous flow Calculators\nCreated Aerodynamic heating to surface\nCreated Chapman–Rubesin factor\nCreated Coefficient of friction using Stanton equation for incompressible flow\nCreated Density calculation using Chapman–Rubesin factor\nCreated Internal Energy for Hypersonic Flow\nCreated Non dimensional internal energy parameter\nCreated Non dimensional internal energy parameter using wall-to-freestream temperature ratio\nCreated Non dimensional static enthalpy\nCreated Stanton equation using Overall skin friction coefficient for incompressible flow\nCreated Stanton number for incompressible flow\nCreated Static Density calculation using Chapman–Rubesin factor\nCreated Static enthalpy\nCreated Static viscosity calculation using Chapman–Rubesin factor\nCreated Thermal conductivity using Prandtl number\nCreated Viscosity calculation using Chapman–Rubesin factor\nCreated Wall temperature calculation using internal energy change", null, "Let Others Know" ]
[ null, "https://ipv4.calculatoratoz.com/Images/UserInfo//sanjay.jpg", null, "https://ipv4.calculatoratoz.com/Images/share.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7305544,"math_prob":0.8512181,"size":38138,"snap":"2022-40-2023-06","text_gpt3_token_len":9020,"char_repetition_ratio":0.24159543,"word_repetition_ratio":0.2371617,"special_character_ratio":0.16983062,"punctuation_ratio":0.0094690565,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98400104,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T03:59:25Z\",\"WARC-Record-ID\":\"<urn:uuid:117420e7-0b65-47b9-83ff-e687d34d8890>\",\"Content-Length\":\"406119\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ed9c9d4b-8e03-438f-bc22-88e91ee9b536>\",\"WARC-Concurrent-To\":\"<urn:uuid:f937636c-5f2b-4440-8a9e-8dcee8ee0c2e>\",\"WARC-IP-Address\":\"67.43.15.151\",\"WARC-Target-URI\":\"https://ipv4.calculatoratoz.com/en/sanjay-krishna/847335e4-0c39-40b1-881f-94a636b99244/profile\",\"WARC-Payload-Digest\":\"sha1:SKOPATWB6XUQUVG6HQFFFJS2IL2YFOIL\",\"WARC-Block-Digest\":\"sha1:IY4F2IOWZZBKBIT6DQKFQVJFHBFZEA3N\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334644.42_warc_CC-MAIN-20220926020051-20220926050051-00144.warc.gz\"}"}
https://www.convertunits.com/from/cubic+inch/to/hectolitre
[ "## Convert cubic inch to hectoliter\n\n cubic inch hectolitre\n\nHow many cubic inch in 1 hectolitre? The answer is 6102.37438368.\nWe assume you are converting between cubic inch and hectoliter.\nYou can view more details on each measurement unit:\ncubic inch or hectolitre\nThe SI derived unit for volume is the cubic meter.\n1 cubic meter is equal to 61023.7438368 cubic inch, or 10 hectolitre.\nNote that rounding errors may occur, so always check the results.\nUse this page to learn how to convert between cubic inches and hectoliters.\nType in your own numbers in the form to convert the units!\n\n## Want other units?\n\nYou can do the reverse unit conversion from hectolitre to cubic inch, or enter any two units below:\n\n## Enter two units to convert\n\n From: To:\n\n## Definition: Cubic inch\n\nA cubic inch is the volume of a cube which is one inch long on each edge. It is equal to 16.387064 cm³.\n\n## Definition: Hectolitre\n\nA hectolitre (hL or hl) is volume measure and a metric unit equal to 100 litres, or 10^?1 m^3.\n\n## Metric conversions and more\n\nConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3\", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.798507,"math_prob":0.9580226,"size":1510,"snap":"2022-40-2023-06","text_gpt3_token_len":394,"char_repetition_ratio":0.1998672,"word_repetition_ratio":0.0,"special_character_ratio":0.24635762,"punctuation_ratio":0.14330219,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95882004,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T09:06:53Z\",\"WARC-Record-ID\":\"<urn:uuid:e039695a-ff1e-4453-bc2a-9593d46ee4e9>\",\"Content-Length\":\"38056\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aafbb674-12f3-441c-9293-575a9f24a886>\",\"WARC-Concurrent-To\":\"<urn:uuid:f0db8569-5dfe-4bcc-9bdd-fb24c989e3f2>\",\"WARC-IP-Address\":\"52.5.245.163\",\"WARC-Target-URI\":\"https://www.convertunits.com/from/cubic+inch/to/hectolitre\",\"WARC-Payload-Digest\":\"sha1:DUF7AM43YKQOG6YC3RZ35NPAOQ7CAE24\",\"WARC-Block-Digest\":\"sha1:TRXF6233TGWDNYSNRN6JI2PPIBIHXCYW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500392.45_warc_CC-MAIN-20230207071302-20230207101302-00117.warc.gz\"}"}
https://ask.sagemath.org/question/8185/magma-object-from-magmaeval/
[ "# Magma object from magma.eval()?\n\nHow can I get a Magma object to use inside of Sage from something created in Magma through magma.eval()? Is this possible?\n\nedit retag close merge delete\n\nSort by » oldest newest most voted", null, "Suppose you created a polynomial in Magma with the following command:\n\n sage: magma.eval('R<x> := PolynomialRing(RationalField()); f := (x-17/2)^3;')\n\n\nThen, you can get a Sage version of that object like this:\n\n sage: magma('f').sage()\n\n\nThe magma('f') part creates an object (MagmaElement) in Sage which is basically a pointer to the variable f in the Magma session. (You can actually use this perform calls on this objects which will translate to Magma commands. For example, magma('f').Factorization() is basically same as doing magma.eval('result := Factorization(f);') and returning magma('result').) The sage() method will convert a MagmaElement to the corresponding Sage object if possible.\n\nmore" ]
[ null, "https://www.gravatar.com/avatar/942da5ecdfa283932093423444de5011", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7943152,"math_prob":0.6296997,"size":1507,"snap":"2022-05-2022-21","text_gpt3_token_len":367,"char_repetition_ratio":0.117099136,"word_repetition_ratio":0.22123894,"special_character_ratio":0.23954877,"punctuation_ratio":0.13194445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9567182,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T13:18:16Z\",\"WARC-Record-ID\":\"<urn:uuid:fce4dc6a-c0c0-4dee-956f-ec8f5939dd9c>\",\"Content-Length\":\"51374\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c24c25c-acb0-40e0-b2b3-0ddad4e2bbb3>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba25e18c-63de-4f7a-8f55-919c91bb254d>\",\"WARC-IP-Address\":\"194.254.163.53\",\"WARC-Target-URI\":\"https://ask.sagemath.org/question/8185/magma-object-from-magmaeval/\",\"WARC-Payload-Digest\":\"sha1:2VDP24SN4WA6JJVOP3AONMUDPOZPMFDB\",\"WARC-Block-Digest\":\"sha1:LOJGJPW7QX5TDBY6F57QJRA4CYBKLY2F\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662539101.40_warc_CC-MAIN-20220521112022-20220521142022-00055.warc.gz\"}"}
https://lcmgcf.com/hcf-of-403-301-889-by-euclid-division-algorithm/
[ "# Highest Common Factor of 403, 301, 889 using Euclid's algorithm\n\nCreated By : Jatin Gogia\n\nReviewed By : Rajasekhar Valipishetty\n\nLast Updated : Apr 06, 2023\n\nHCF Calculator using the Euclid Division Algorithm helps you to find the Highest common factor (HCF) easily for 403, 301, 889 i.e. 1 the largest integer that leaves a remainder zero for all numbers.\n\nHCF of 403, 301, 889 is 1 the largest number which exactly divides all the numbers i.e. where the remainder is zero. Let us get into the working of this example.\n\nConsider we have numbers 403, 301, 889 and we need to find the HCF of these numbers. To do so, we need to choose the largest integer first and then as per Euclid's Division Lemma a = bq + r where 0 ≤ r ≤ b\n\nHighest common factor (HCF) of 403, 301, 889 is 1.\n\nHCF(403, 301, 889) = 1\n\n## HCF of 403, 301, 889 using Euclid's algorithm\n\nHighest common factor or Highest common divisor (hcd) can be calculated by Euclid's algotithm.\n\nHCF of:\n\nHighest common factor (HCF) of 403, 301, 889 is 1.", null, "### Highest Common Factor of 403,301,889 is 1\n\nStep 1: Since 403 > 301, we apply the division lemma to 403 and 301, to get\n\n403 = 301 x 1 + 102\n\nStep 2: Since the reminder 301 ≠ 0, we apply division lemma to 102 and 301, to get\n\n301 = 102 x 2 + 97\n\nStep 3: We consider the new divisor 102 and the new remainder 97, and apply the division lemma to get\n\n102 = 97 x 1 + 5\n\nWe consider the new divisor 97 and the new remainder 5,and apply the division lemma to get\n\n97 = 5 x 19 + 2\n\nWe consider the new divisor 5 and the new remainder 2,and apply the division lemma to get\n\n5 = 2 x 2 + 1\n\nWe consider the new divisor 2 and the new remainder 1,and apply the division lemma to get\n\n2 = 1 x 2 + 0\n\nThe remainder has now become zero, so our procedure stops. Since the divisor at this stage is 1, the HCF of 403 and 301 is 1\n\nNotice that 1 = HCF(2,1) = HCF(5,2) = HCF(97,5) = HCF(102,97) = HCF(301,102) = HCF(403,301) .\n\nWe can take hcf of as 1st numbers and next number as another number to apply in Euclidean lemma\n\nStep 1: Since 889 > 1, we apply the division lemma to 889 and 1, to get\n\n889 = 1 x 889 + 0\n\nThe remainder has now become zero, so our procedure stops. Since the divisor at this stage is 1, the HCF of 1 and 889 is 1\n\nNotice that 1 = HCF(889,1) .\n\n### HCF using Euclid's Algorithm Calculation Examples\n\nHere are some samples of HCF using Euclid's Algorithm calculations.\n\n### Frequently Asked Questions on HCF of 403, 301, 889 using Euclid's Algorithm\n\n1. What is the Euclid division algorithm?\n\nAnswer: Euclid's Division Algorithm is a technique to compute the Highest Common Factor (HCF) of given positive integers.\n\n2. what is the HCF of 403, 301, 889?\n\nAnswer: HCF of 403, 301, 889 is 1 the largest number that divides all the numbers leaving a remainder zero.\n\n3. How to find HCF of 403, 301, 889 using Euclid's Algorithm?\n\nAnswer: For arbitrary numbers 403, 301, 889 apply Euclid’s Division Lemma in succession until you obtain a remainder zero. HCF is the remainder in the last but one step." ]
[ null, "https://lcmgcf.com/media/featured_images/hcf-calculator.webp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85465664,"math_prob":0.98485553,"size":2675,"snap":"2023-14-2023-23","text_gpt3_token_len":820,"char_repetition_ratio":0.16136278,"word_repetition_ratio":0.15601504,"special_character_ratio":0.3435514,"punctuation_ratio":0.13122924,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997888,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T07:31:04Z\",\"WARC-Record-ID\":\"<urn:uuid:4fff12f8-7e5b-4094-ad13-2491bb5b07a1>\",\"Content-Length\":\"44855\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d753233-4765-4a55-9e5f-ff665f0adffd>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1fd4cd6-b5ce-42d4-9f99-5f55e94e3036>\",\"WARC-IP-Address\":\"172.67.156.131\",\"WARC-Target-URI\":\"https://lcmgcf.com/hcf-of-403-301-889-by-euclid-division-algorithm/\",\"WARC-Payload-Digest\":\"sha1:Y7DPRWO65RZGGGPTNINZMVCYROFUNNJA\",\"WARC-Block-Digest\":\"sha1:2XAPKNSAMLQNTXPCYFLY7FDQQERJ6SLX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655446.86_warc_CC-MAIN-20230609064417-20230609094417-00414.warc.gz\"}"}
https://catchcabby.com/and-pdf/1333-multivariable-calculus-problems-and-solutions-pdf-275-298.php
[ "# Multivariable calculus problems and solutions pdf\n\n6.47  ·  9,310 ratings  ·  672 reviews", null, "## Calculus - Ron Larson, Bruce H. Edwards - Google Книги\n\nI was just wondering what a good online rescource would be for a quality look at the analysis of R n. We'll start with basic math and end up at quantum mechanics. The book introduces students to fundamentals of calculus, using advanced approach. Best Multivariable Calculus Book Reddit. Linear Algebra is almost universally considered dreadfully boring, but it's not too difficult. Calculus Textbook answers Questions Review. Ask our subject experts for help answering any of your homework questions!\nFile Name: multivariable calculus problems and solutions pdf.zip\nSize: 50178 Kb\nPublished 13.04.2019\n\n## Lagrange Multipliers Practice Problems\n\nJan 6, - Understanding Multivariable Calculus Problems Solutions and Tips pdf. Understanding Multivariable Calculus Problems Solutions and Tips pdf.\n\n## Module MA2E02", null, "I want to know opinions on a decent textbook to use. Part B - 15 questions - 45 minutes - Graphing calculator required. Velocity and Acceleration - In this section we will revisit a standard application of derivatives, cover all the aspects of calculus you are likely to encounter in a standard university course or series of courses on calculus.\n\nI know basic calculus and have taken courses in vector calculus. But I do not agree with those researchers. Directions: Solve each of the following problems. Improve your chances of college credit with our quizzes.\n\nAre You a Quality Pro. Linear span, and carefully graded problem sets that have made Stewart's texts best-sellers continue to provide a strong foundation for the Eighth Edition, as opposed to appearing in question sets, Finite Student Solution Manual for 4th edition of Vector Calculus Amd in linear algebra are central in many areas of mathemati. The multiple-choice questions on the AP Calculus AB exam cover a variety of calculus topic are discre. The patient explanat.\n\nDon Shimamoto, Swarthmore College. Textbook: Lay or Strang. Linear Models and Least Squares Regression. Textbook: Larson and Edwards Clemson sample calculus exams.\n\nIt reaches to students in more advanced courses such as Multivariable Calculus, and Analysis, Advanced Calculus is probably not too bad if you just want to learn to work computations. List of Topics Calculus III or Prob,ems Calculus is conceptually harder to understand its theorems and principles because Calc III often goes beyond three dimensions and gets harder to imagine what's going on. Kaplan. Most of these also come in single variable and multivariable models.\n\nBest self taught Calculus book. Directional Derivatives and Gradients. Hardy is in modern terms a good theoretical calculus book, containing enough material and sophistication for a transitional real analysis course. Possibly some matrix theory and more advanced topics.", null, "This book covers topics such as functions and models, limits and derivatives, differentiation and applications of differentiation, integrals, techniques of integration, vectors, vector functions, partial derivatives, multiple integrals, vector calculus, and second-order differential equations. Spivak, The hitchhiker's guide to calculus. Best book for calculus is.\n\nThe AP Calculus AB book has six sample exams consisting of 45 multiple-choice questions and 6 free-response questions in each sample exam. Surface Integrals - In this section we introduce the idea of a surface integral. 
Best book for calculus is. Select a chapter, section.\n\nYou need to know the terms and notation in order to successfully master the concepts. Vector calculus, but a lot of s students love .\n\nIt is well organized, and is rich with applications, applications to geometry 3. Triple produc. This coordinates system is very useful for dealing with spherical objects. For integral and differential calculus till 12th grade: Vikas Rahi proboems concepts of functions and calculus.\n\nMath 55 is a two-semester long first-year undergraduate mathematics course at Harvard Previously, the official title was Honors Advanced Calculus and Linear to enroll in a course such as Math Multivariable Calculus 19 students. I teach high school. I see connections between the mathematics that is used in the real world and the mathematics I teach in my classroom. Textbook: Edwards and Penney.\n\nDetermine the relative minimum and maximum values of the function? The Calculus examination covers skills and concepts that are usually taught in a one-semester college course in calculus. This environment will give strong mathematical concepts to students providing a clear roadmap to problem-solving. We will also define the normal line and discuss how the gradient vector can be used to find the equation of the normal line. Partial Derivatives - In this section we will the idea of partial derivatives!", null, "Here are a set of practice problems for the Calculus III notes. Click on the \" Solution \" link for each problem to go to the page containing the solution. Note that some sections will have more problems than others and some will have more or less of a variety of problems. Most sections should have a range of difficulty levels in the problems although this will vary from section to section. Here is a listing of sections for which practice problems have been written as well as a brief description of the material covered in the notes for that particular section. Practice Quick Nav Download.\n\n### Updated\n\nTaught by Professor Bruce H. UC Davis sample calculus exams? I'm abd the difference for CS folks is that they rarely use this stuff in the curriculum, you'll use calculus day and night. This course is available for EM credit.\n\nSequences and Sets. Click on the \" Solution \" link for each problem to go to the page containing the solution. Bradley sample Calc I exams. The best introductory textbook on multivariable calculus for the rank beginner that I know is Vector Calculus by Peter Baxandall and Hans Liebeck.\n\nLine Integrals - Part I - In this section we will start off with a quick review of parameterizing curves. Large collection of exams sorted by topics? Not open to students with credit for any course or above, or for any quarter-system class All subjects.\n\nLines and Planes in Space. In ahd, never heard of anyone taking Calculus: A word that triggers involuntary fear spasms in the best of us, in turn. I have never heard of Multivariable calculus until around this year? This familiarity will make calculus get easier and easier one day at a time.\n\nThe vitamin d cure book\n449 books — 10 voters\n\nand pdf\n\n1.", null, "Stevie D. says:\n2.", null, "Diddzintiowret says:\n3.", null, "Ares O. says:" ]
[ null, "https://catchcabby.com/img/e0d9e5a3d7450e69d0c5df263ed06335.jpg", null, "https://catchcabby.com/img/779a820ebd70d0fe094df4df616a6a18.jpg", null, "https://catchcabby.com/img/multivariable-calculus-problems-and-solutions-pdf.jpg", null, "https://catchcabby.com/img/669995.png", null, "https://catchcabby.com/img/9adc8eeb587b93e99bc8d79e01ccad21.png", null, "https://catchcabby.com/img/458209.png", null, "https://catchcabby.com/img/149460.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9161289,"math_prob":0.892504,"size":6381,"snap":"2020-34-2020-40","text_gpt3_token_len":1281,"char_repetition_ratio":0.13454603,"word_repetition_ratio":0.034653466,"special_character_ratio":0.18492399,"punctuation_ratio":0.10848644,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9784477,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T14:50:00Z\",\"WARC-Record-ID\":\"<urn:uuid:460130b0-c597-4012-9d34-33227a4e6d9e>\",\"Content-Length\":\"35937\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe43e77b-6218-438a-bb7d-7056ee0993a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:db5af622-61ec-4324-bc45-f5db8131eaef>\",\"WARC-IP-Address\":\"104.24.116.205\",\"WARC-Target-URI\":\"https://catchcabby.com/and-pdf/1333-multivariable-calculus-problems-and-solutions-pdf-275-298.php\",\"WARC-Payload-Digest\":\"sha1:D3CZOIJJRXCFMEMHMR4LOJTOJWBNJIZN\",\"WARC-Block-Digest\":\"sha1:UMPF3FNOYJF2IKHVG7RD44CAMGJZ4J4E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400226381.66_warc_CC-MAIN-20200925115553-20200925145553-00256.warc.gz\"}"}
https://tex.stackexchange.com/questions/239301/mhchem-macro-in-subscript
[ "# mhchem macro in subscript\n\nConsider the following MWE\n\n\\documentclass{article}\n\\usepackage[version=3]{mhchem}\n\\begin{document}\n\\begin{equation*}\n% c_\\ce{CO2} not working\nc_{\\ce{CO2}} \\quad c_\\text{working}\n\\end{equation*}\n\\end{document}\n\n\nwhich produce", null, "Now: why do I have to embrace (all the puns intended) the \\ce{CO2} macro with braces, while the \\text{} macro doesn't need them? The commented text produces an error missing { inserted c_\\ce{CO2}\n\n• The fact that \\text works without the braces is purely incidental and should not be relied upon. See tex.stackexchange.com/a/160538/4427 – egreg Apr 18 '15 at 17:51\n• To explain what happens, the expansion of \\text starts with {, which is then used for the subscript. – egreg Apr 18 '15 at 17:59\n• In the same way as \\mathrm{}, as you explain in the question above? – Holene Apr 18 '15 at 18:00\n• Not completely the same, but basically so. I think I have already explained the behavior somewhere, and I'm trying to find it. – egreg Apr 18 '15 at 18:01\n\n## 1 Answer\n\nLet's see what amstext.sty says:\n\n\\DeclareRobustCommand{\\text}{%\n\\ifmmode\\expandafter\\text@\\else\\expandafter\\mbox\\fi}\n\\def\\text@#1{{\\mathchoice\n{\\textdef@\\displaystyle\\f@size{#1}}%\n{\\textdef@\\textstyle\\f@size{\\firstchoice@false #1}}%\n{\\textdef@\\textstyle\\sf@size{\\firstchoice@false #1}}%\n{\\textdef@\\textstyle \\ssf@size{\\firstchoice@false #1}}%\n\\check@mathfonts\n}%\n}\n\n\nIf we are in math mode, when \\text{xyz} is found, TeX follows the “true” branch and so is presented with \\text@{xyz} (because the \\else...\\fi is discarded.\n\nThen it substitutes \\text@ with its definition, that is,\n\n{\\mathchoice{...}}\n\n\nand these additional braces keep _ happy. We must recall that _ in math mode causes expansion of the following tokens.\n\nI guess that the additional braces were introduced just for avoiding unscrutable errors that _\\text{xyz} would produce otherwise.\n\nUnfortunately, it allows that kind of “wrong” input. It's much similar to what happens with _\\mathrm{xyz}, that I have roughly explained in https://tex.stackexchange.com/a/160538/4427\n\nOn the other hand, the definition of \\ce has nothing of this kind:\n\n\\newcommand*{\\ce}{%\n\\ifx\\protect\\@typeset@protect\n\\csname ce \\expandafter\\endcsname\n\\else\n\\ifx\\protect\\@unexpandable@protect\n\\protect@unexpand@cmd@arg\\ce\n\\else\n\\ifx\\protect\\string\n\\protect@string@cmd@arg\\ce\n\\else\n\\expandafter\\protect@unknown@cmd@arg\n\\csname ce \\endcsname\n\\fi\n\\fi\n\\fi\n}\n\n\nThis will definitely make TeX unhappy upon seeing _\\ce{...}." ]
[ null, "https://i.stack.imgur.com/rJuVM.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74976844,"math_prob":0.50478786,"size":1760,"snap":"2019-13-2019-22","text_gpt3_token_len":539,"char_repetition_ratio":0.10535307,"word_repetition_ratio":0.018867925,"special_character_ratio":0.275,"punctuation_ratio":0.11875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96456283,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-25T15:47:22Z\",\"WARC-Record-ID\":\"<urn:uuid:da320cc0-e3a3-4a93-82ac-863841bd88fb>\",\"Content-Length\":\"132705\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ee0ef40-87f4-453e-a86d-037b42c23e6a>\",\"WARC-Concurrent-To\":\"<urn:uuid:92c74d1b-38bf-425d-8891-b5d507096420>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/239301/mhchem-macro-in-subscript\",\"WARC-Payload-Digest\":\"sha1:ZV7L5IBBRP2EZUB3ESDV3KTKCJBSH3QD\",\"WARC-Block-Digest\":\"sha1:ARHRDNKKNZOL4CZWY5S2FXL6ZSC4KQSV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232258120.87_warc_CC-MAIN-20190525144906-20190525170906-00320.warc.gz\"}"}
https://juliagraphs.org/Graphs.jl/dev/algorithms/editdist/
[ "# Edit distance\n\nGraphs.jl allows computation of the graph edit distance.\n\n## Full docs\n\nGraphs.MinkowskiCostMethod\nMinkowskiCost(μ₁, μ₂; p::Real=1)\n\nFor labels μ₁ on the vertices of graph G₁ and labels μ₂ on the vertices of graph G₂, compute the p-norm cost of substituting vertex u ∈ G₁ by vertex v ∈ G₂.\n\nOptional Arguments\n\np=1: the p value for p-norm calculation.\n\nsource\nGraphs.edit_distanceMethod\nedit_distance(G₁::AbstractGraph, G₂::AbstractGraph)\n\nCompute the edit distance between graphs G₁ and G₂. Return the minimum edit cost and edit path to transform graph G₁ into graph G₂. An edit path consists of a sequence of pairs of vertices(u,v) ∈ [0,|G₁|] × [0,|G₂|] representing vertex operations:\n\n• $(0,v)$: insertion of vertex $v ∈ G₂$\n• $(u,0)$: deletion of vertex $u ∈ G₁$\n• $(u>0,v>0)$: substitution of vertex $u ∈ G₁$ by vertex $v ∈ G₂$\n\nOptional Arguments\n\n• insert_cost::Function=v->1.0\n• delete_cost::Function=u->1.0\n• subst_cost::Function=(u,v)->0.5\n\nBy default, the algorithm uses constant operation costs. The user can provide classical Minkowski costs computed from vertex labels μ₁ (for G₁) and μ₂ (for G₂) in order to further guide the search, for example:\n\nedit_distance(G₁, G₂, subst_cost=MinkowskiCost(μ₁, μ₂))\n• heuristic::Function=DefaultEditHeuristic: a custom heuristic provided to the A*\n\nsearch in case the default heuristic is not satisfactory.\n\nPerformance\n\n• Given two graphs $|G₁| < |G₂|$, edit_distance(G₁, G₂) is faster to\n\ncompute than edit_distance(G₂, G₁). Consider swapping the arguments if involved costs are equivalent.\n\n• The use of simple Minkowski costs can improve performance considerably.\n• Exploit vertex attributes when designing operation costs.\n\nReferences\n\n• RIESEN, K., 2015. Structural Pattern Recognition with Graph Edit Distance: Approximation Algorithms and Applications. (Chapter 2)\n\nAuthor\n\n• Júlio Hoffimann Mendes ([email protected])\n\nExamples\n\njulia> g1 = SimpleDiGraph([0 1 0 0 0; 0 0 1 0 0; 1 0 0 1 0; 0 0 0 0 1; 0 0 0 1 0]);\n\njulia> g2 = SimpleDiGraph([0 1 0; 0 0 1; 1 0 0]);\n\njulia> edit_distance(g1, g2)\n(3.5, Tuple[(1, 2), (2, 1), (3, 0), (4, 3), (5, 0)])`\nsource" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59531826,"math_prob":0.9703508,"size":2365,"snap":"2022-40-2023-06","text_gpt3_token_len":738,"char_repetition_ratio":0.1253706,"word_repetition_ratio":0.039325844,"special_character_ratio":0.28118393,"punctuation_ratio":0.19087137,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9969462,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T14:15:48Z\",\"WARC-Record-ID\":\"<urn:uuid:4e10ff7e-fcfe-4c3a-9d20-661ccc08ff98>\",\"Content-Length\":\"15263\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4da1941b-973f-4094-a267-a206a70e4d26>\",\"WARC-Concurrent-To\":\"<urn:uuid:30c9cdef-9905-4258-85fd-d6e12b0fdd97>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://juliagraphs.org/Graphs.jl/dev/algorithms/editdist/\",\"WARC-Payload-Digest\":\"sha1:BIB2WHSZKCYPOEFV6Z7HBTHTNYKZKDOE\",\"WARC-Block-Digest\":\"sha1:OQQJ6D7GSG4WC37QSNNICJ6D6UEGCP5L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335355.2_warc_CC-MAIN-20220929131813-20220929161813-00317.warc.gz\"}"}
https://www.nuclear-power.net/nuclear-engineering/thermodynamics/thermodynamic-cycles/ericsson-cycle-theory-and-efficiency/
[ "# Ericsson Cycle – Theory and Efficiency\n\nThe Ericsson cycle is named after a Swedish-American inventor John Ericsson, who designed and built many unique heat engines based on various thermodynamic cycles. He is credited with inventing two unique heat engine cycles and developing practical engines based on these cycles.\n\nHis first thermodynamic cycle “the first Ericsson cycle” is now called the “Brayton cycle“, in fact it is the closed Brayton cycle, which is commonly applied to modern closed cycle gas turbine engines.\n\nThe second Ericsson cycle is what is now called the Ericsson cycle. The second Ericsson cycle is similar to the Brayton cycle, but uses external heat and incorporates the multiple use of an intercooling and reheat. In fact, it is like a Brayton cycle with an infinite number of reheat and intercooler stages in the cycle. Compared to the Brayton cycle which uses adiabatic compression and expansion, an ideal Ericsson cycle consists of isothermal compression and expansion processes, combined with isobaric heat regeneration between them. Applying intercooling, heat regeneration and sequential combustion significantly increases thermal efficiency of a turbine, in fact, the thermal efficiency of the ideal Ericsson cycle equals to the Carnot efficiency.\n\nIt is assumed (in an ideal case) that each intercooler return the working fluid to the ambient temperature T1 and each reheater reheats the working fluid to the temperature T3. The regenerator is 100% efficient and allows the heat input for process 2 → 3 to be obtained from the heat rejected in process 4 → 1. Since there is no need of heat transfer (Qadd) in process 2 → 3, all the heat added externally would occur in the reheaters and all the heat rejected to the surroundings would take place in the intercoolers. As can be seen from the picture, in this case all the heat added would occur when the working fluid is at its highest temperature, T3, and all the heat rejected would take place when the working fluid is at its lowest temperature, T1. Since irreversibilities are presumed absent and all the heat is supplied and rejected isothermally, the thermal efficiency of an ideal Ericsson cycle can be calculated from these temperatures:", null, "where:\n\n• ηCarnot is the efficiency of Carnot cycle, i.e. it is the ratio = W/QH of the work done by the engine to the heat energy entering the system from the hot reservoir.\n• TC is the absolute temperature (Kelvins) of the cold reservoir,\n• TH is the absolute temperature (Kelvins) of the hot reservoir.\n\nAlthough the thermodynamic processes of the Ericsson cycle differ from those of the Carnot cycle, both cycles have the same value of thermal efficiency when operating between the temperatures TH and TC.\n\nReferences:\nNuclear and Reactor Physics:\n1. J. R. Lamarsh, Introduction to Nuclear Reactor Theory, 2nd ed., Addison-Wesley, Reading, MA (1983).\n2. J. R. Lamarsh, A. J. Baratta, Introduction to Nuclear Engineering, 3d ed., Prentice-Hall, 2001, ISBN: 0-201-82498-1.\n3. W. M. Stacey, Nuclear Reactor Physics, John Wiley & Sons, 2001, ISBN: 0- 471-39127-1.\n4. Glasstone, Sesonske. Nuclear Reactor Engineering: Reactor Systems Engineering, Springer; 4th edition, 1994, ISBN: 978-0412985317\n5. W.S.C. Williams. Nuclear and Particle Physics. Clarendon Press; 1 edition, 1991, ISBN: 978-0198520467\n6. Kenneth S. Krane. Introductory Nuclear Physics, 3rd Edition, Wiley, 1987, ISBN: 978-0471805533\n7. G.R.Keepin. Physics of Nuclear Kinetics. Addison-Wesley Pub. Co; 1st edition, 1965\n8. 
Robert Reed Burn, Introduction to Nuclear Reactor Operation, 1988.\n9. U.S. Department of Energy, Nuclear Physics and Reactor Theory. DOE Fundamentals Handbook, Volume 1 and 2. January 1993." ]
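As a quick worked illustration of the efficiency relation discussed above (the equation itself survives only as an image in the source, and the temperatures below are illustrative values, not taken from the article): an ideal Ericsson cycle operating between a hot reservoir at $T_H = 900\ \mathrm{K}$ and a cold reservoir at $T_C = 300\ \mathrm{K}$ reaches the Carnot limit

$$\eta_{th} = 1 - \frac{T_C}{T_H} = 1 - \frac{300\ \mathrm{K}}{900\ \mathrm{K}} \approx 0.67,$$

that is, about 67 % of the heat supplied at $T_H$ can, in this ideal limit, be converted into work.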
[ null, "https://www.nuclear-power.net/wp-content/uploads/2017/02/carnot-efficiency-equation.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9387491,"math_prob":0.73826534,"size":2692,"snap":"2021-21-2021-25","text_gpt3_token_len":566,"char_repetition_ratio":0.15848215,"word_repetition_ratio":0.04090909,"special_character_ratio":0.19167905,"punctuation_ratio":0.07949791,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9578647,"pos_list":[0,1,2],"im_url_duplicate_count":[null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-13T15:04:36Z\",\"WARC-Record-ID\":\"<urn:uuid:e117b0e6-ff04-4d24-a72a-69ac927b0469>\",\"Content-Length\":\"265049\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d49d0a76-eaeb-473d-b8f4-a5a76c20e021>\",\"WARC-Concurrent-To\":\"<urn:uuid:e608ea1f-63a2-4427-a0ce-cafd10f4945e>\",\"WARC-IP-Address\":\"104.21.66.209\",\"WARC-Target-URI\":\"https://www.nuclear-power.net/nuclear-engineering/thermodynamics/thermodynamic-cycles/ericsson-cycle-theory-and-efficiency/\",\"WARC-Payload-Digest\":\"sha1:F25O4C2QI7BM32CBCNI4URBVT7XAC672\",\"WARC-Block-Digest\":\"sha1:OVMPSHYFFA6ANRR7FEOXDKJ22DJBWHFJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989814.35_warc_CC-MAIN-20210513142421-20210513172421-00305.warc.gz\"}"}
https://math.stackexchange.com/questions/3615474/show-that-projection-transforms-open-sets-to-open-sets
[ "# Show that projection transforms open sets to open sets\n\nI want to show that $$f: \\mathbb{R}^n \\to \\mathbb{R}^m, m \\leq n, f(x_{1},...,x_{n})=(x_{1},...,x_{m})$$ transforms open sets to open sets. I think it sufficies to show that it transforms open disk into open disk, I'm not sure why and how to start working on that.\n\nIf $$B=B((x_1,x_2, \\ldots, x_n), r)$$ is an open ball in $$\\Bbb R^n$$, then show that $$f[B]= B((x_1,x_2, \\ldots, x_m), r)$$ where the last ball is taken in $$\\Bbb R^m$$ of course. The inclusion from left to right is obvious as $$f$$ decreases distances (we're leaving out $$n-m$$ coordinates with contributions $$\\ge 0$$ under the square root), and for the other inclusion we can add $$x_{m+1}, \\ldots, x_n$$ to a point in $$\\Bbb R^m$$ and the distance to the centre of $$B$$ will stay the same as its distance to $$(x_1,x_2, \\ldots, x_m)$$, etc.\nHint: Every norm (hence topology stemming from this norm, i.e. the natural topology) on $$\\mathbb{R}^d$$ is equivalent to any other norm. Take the norm $$\\Vert x \\Vert_{\\infty} = \\max_{i = 1, \\dots, d} |x_i|$$, so that your open disks $$B(x, r)$$ are of the form $$A_1 \\times A_2 \\times \\dots \\times A_d$$ with $$A_i = (x_i-r, x_i+r)$$.\nSince $$\\mathbb{R^n}$$ and $$\\mathbb{R^m}$$ are Banach spaces, and $$f$$ is a surjective and continuous linear operator, then by the open mapping theorem, $$f$$ is an open map." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8830917,"math_prob":1.0000081,"size":1494,"snap":"2021-31-2021-39","text_gpt3_token_len":494,"char_repetition_ratio":0.10067114,"word_repetition_ratio":0.0,"special_character_ratio":0.33801875,"punctuation_ratio":0.15697674,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000085,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-26T17:42:26Z\",\"WARC-Record-ID\":\"<urn:uuid:e6ee7c98-b128-4910-8923-fc9c9aa1c06f>\",\"Content-Length\":\"178221\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:348dad70-df4f-46a1-9b25-d2fcce54aa0b>\",\"WARC-Concurrent-To\":\"<urn:uuid:fee95a35-102b-4737-b0e2-76ac69c36740>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3615474/show-that-projection-transforms-open-sets-to-open-sets\",\"WARC-Payload-Digest\":\"sha1:4FTZPLNZD3SOBANBPUDIZWD34KRVX7NS\",\"WARC-Block-Digest\":\"sha1:YHN5SRYCYCT2JV6WGOMUX7QMWNGJ62RV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152144.81_warc_CC-MAIN-20210726152107-20210726182107-00311.warc.gz\"}"}
https://www.bartleby.com/solution-answer/chapter-22-problem-28e-chemistry-9th-edition/9781133611097/give-the-structure-for-each-of-the-following-a-4-methyl-1-pentyne-b-233-trimethyl-1-hexene-c/5c5c3003-a274-11e8-9bb5-0ece094302b6
[ "", null, "", null, "", null, "Chapter 22, Problem 28E\n\nChapter\nSection\nTextbook Problem\n\n# Give the structure for each of the following.a. 4-methyl-1-pentyneb. 2,3,3-trimethyl-1-hexenec. 3-ethyl-4-decene\n\n(a)\n\nInterpretation Introduction\n\nInterpretation: The structures of the given compounds are to be drawn.\n\nConcept introduction: Structure of any organic compound is drawn by following the sets of rules devised by IUPAC. Any structure denotes a particular compound. The root word determines the number of carbons while counting the longest carbon chain. Double of triple bond should be given lowest carbon number. If more than one substituent are present, prefixes like di, tri, tetra, etc. are used.\n\nExplanation\n\nExplanation\n\nTo determine: The structure of the given compound.\n\nThe given compound is 4 -methyl- 1 -pentyne. The word “ 1 -pentyne” means five carbon atoms are present in the longest carbon chain with one triple bond between first and second carbon\n\n(b)\n\nInterpretation Introduction\n\nInterpretation: The structures of the given compounds are to be drawn.\n\nConcept introduction: Structure of any organic compound is drawn by following the sets of rules devised by IUPAC. Any structure denotes a particular compound. The root word determines the number of carbons while counting the longest carbon chain. Double of triple bond should be given lowest carbon number. If more than one substituent are present, prefixes like di, tri, tetra, etc. are used.\n\n(c)\n\nInterpretation Introduction\n\nInterpretation: The structures of the given compounds are to be drawn.\n\nConcept introduction: Structure of any organic compound is drawn by following the sets of rules devised by IUPAC. Any structure denotes a particular compound. The root word determines the number of carbons while counting the longest carbon chain. Double of triple bond should be given lowest carbon number. If more than one substituent are present, prefixes like di, tri, tetra, etc. are used.\n\n### Still sussing out bartleby?\n\nCheck out a sample textbook solution.\n\nSee a sample solution\n\n#### The Solution to Your Study Problems\n\nBartleby provides explanations to thousands of textbook problems written by our experts, many with advanced degrees!\n\nGet Started\n\n#### Find more solutions based on key concepts", null, "" ]
[ null, "https://www.bartleby.com/static/search-icon-white.svg", null, "https://www.bartleby.com/static/close-grey.svg", null, "https://www.bartleby.com/static/solution-list.svg", null, "https://www.bartleby.com/static/logo.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78940064,"math_prob":0.8990192,"size":3442,"snap":"2019-43-2019-47","text_gpt3_token_len":743,"char_repetition_ratio":0.14717859,"word_repetition_ratio":0.48333332,"special_character_ratio":0.17809413,"punctuation_ratio":0.12099644,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97984713,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T17:19:29Z\",\"WARC-Record-ID\":\"<urn:uuid:b74dba80-12cf-43b1-97d3-7a7790f50aac>\",\"Content-Length\":\"667024\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b15f8926-5b88-46d1-986d-0e3bd28f2ac3>\",\"WARC-Concurrent-To\":\"<urn:uuid:f473fd2e-c140-40cf-b857-26a34568ea72>\",\"WARC-IP-Address\":\"99.86.230.91\",\"WARC-Target-URI\":\"https://www.bartleby.com/solution-answer/chapter-22-problem-28e-chemistry-9th-edition/9781133611097/give-the-structure-for-each-of-the-following-a-4-methyl-1-pentyne-b-233-trimethyl-1-hexene-c/5c5c3003-a274-11e8-9bb5-0ece094302b6\",\"WARC-Payload-Digest\":\"sha1:THAG4TWWFQBSQWRYSERV5PY3D7QNEVZK\",\"WARC-Block-Digest\":\"sha1:4GUEIN4YZ3AG5RPJTCLHAGSOLJWLTXMK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667319.87_warc_CC-MAIN-20191113164312-20191113192312-00377.warc.gz\"}"}
https://jp.mathworks.com/matlabcentral/profile/authors/14644008
[ "Community Profile", null, "# Dev Gupta\n\nLast seen: 1年以上 前 2019 年からアクティブ\n\nLearner\n\n#### Statistics\n\n•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "バッジを表示\n\n#### Content Feed\n\nDecimation\nWhen dealing to the Roman Army, the term decimate meant that the entire unit would be broken up into groups of ten soldiers, and...\n\nThere are 10 types of people in the world\nThose who know binary, and those who don't. The number 2015 is a palindrome in binary (11111011111 to be exact) Given a year...\n\nBinary numbers\nGiven a positive, scalar integer n, create a (2^n)-by-n double-precision matrix containing the binary numbers from 0 through 2^n...\n\nImplement a bubble sort technique and output the number of swaps required\nA bubble sort technique compares adjacent items and swaps them if they are in the wrong order. This is done recursively until al...\n\nDot Product\n\nDot Product\n\nFinding peaks\nFind the peak values in the signal. The peak value is defined as the local maxima. For example, x= [1 12 3 2 7 0 3 1 19 7]; ...\n\nMax index of 3D array\nGiven a three dimensional array M(m,n,p) write a code that finds the three coordinates x,y,z of the Maximum value. Example ...\n\nReference Index Number\nGiven a reference set R of elements (each unique but identical in type), and a list V of elements drawn from the set R, possibly...\n\nSet a diagonal\nGiven a matrix M, row vector v of appropriate length, and diagonal index d (where 0 indicates the main diagonal and off-diagonal...\n\nCount consecutive 0's in between values of 1\nSo you have some vector that contains 1's and 0's, and the goal is to return a vector that gives the number of 0's between each ...\n\nCreate an n-by-n null matrix and fill with ones certain positions\nThe positions will be indicated by a z-by-2 matrix. Each row in this z-by-2 matrix will have the row and column in which a 1 has...\n\nGenerate Square Wave\nGenerate a square wave of desired length, number of complete cycles and duty cycle. Here, duty cycle is defined as the fraction ...\n\n2年以上 前\n\nBinary code (array)\nWrite a function which calculates the binary code of a number 'n' and gives the result as an array(vector). Example: Inpu...\n\n2年以上 前\n\nRelative ratio of \"1\" in binary number\nInput(n) is positive integer number Output(r) is (number of \"1\" in binary input) / (number of bits). Example: * n=0; r=...\n\n2年以上 前\n\nFind the longest sequence of 1's in a binary sequence.\nGiven a string such as s = '011110010000000100010111' find the length of the longest string of consecutive 1's. In this examp...\n\n2年以上 前\n\nGiven an unsigned integer x, find the largest y by rearranging the bits in x\nGiven an unsigned integer x, find the largest y by rearranging the bits in x. Example: Input x = 10 Output y is 12 ...\n\n2年以上 前\n\nBit Reversal\nGiven an unsigned integer _x_, convert it to binary with _n_ bits, reverse the order of the bits, and convert it back to an inte...\n\n2年以上 前\n\nConverting binary to decimals\nConvert binary to decimals. Example: 010111 = 23. 110000 = 48.\n\n2年以上 前\n\nFind out sum and carry of Binary adder\nFind out sum and carry of a binary adder if previous carry is given with two bits (x and y) for addition. Examples Previo...\n\n2年以上 前\n\n~~~~~~~ WAVE ~~~~~~~~~\n|The WAVE generator| Once upon a time there was a river. 'Sum' was passing by the river. 
He saw the water of the river that w...\n\n2年以上 前\n\nRemove the air bubbles\nGiven a matrix a, return a matrix b in which all the zeros have \"bubbled\" to the top. That is, any zeros in a given column shoul...\n\n2年以上 前\n\nFind Logic 32\n\n2年以上 前\n\nFind Logic 32\n\n2年以上 前 | 2 | 95 個のソルバー\n\nBack to basics 15 - classes\nCovering some basic topics I haven't seen elsewhere on Cody. Return the class of the input variable.\n\n2年以上 前\n\nFind Logic 31\n\n2年以上 前\n\nFind Logic 31\n\n2年以上 前 | 2 | 86 個のソルバー\n\nFind Logic 30\n\n2年以上 前\n\nFind Logic 30\n\n2年以上 前 | 2 | 90 個のソルバー\n\nPattern matching\nGiven a matrix, m-by-n, find all the rows that have the same \"increase, decrease, or stay same\" pattern going across the columns...\n\n2年以上 前" ]
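One of the Cody problems listed above describes the bubble-sort-with-swap-count exercise only in prose; the submitted solutions are written in MATLAB, but a minimal Python sketch of the same idea (an illustration, not the author's solution) is:

```python
def bubble_sort_swaps(values):
    """Bubble sort a sequence and return the sorted copy and the number of swaps."""
    a = list(values)          # work on a copy
    swaps = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:               # adjacent items out of order
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return a, swaps

print(bubble_sort_swaps([4, 2, 5, 1]))        # ([1, 2, 4, 5], 4)
```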
[ null, "https://jp.mathworks.com/responsive_image/150/150/0/0/0/cache/matlabcentral/profiles/14644008_1548563151426_DEF.jpg", null, "https://jp.mathworks.com/matlabcentral/profile/hunt/MLC_Treasure_Hunt_badge.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/famous.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/quiz_master.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/curator.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/mathworks_generic_group.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/community_authored_group.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/puzzler.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/promoter.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/speed_demon.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/creator.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/solver.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6130793,"math_prob":0.96952474,"size":4003,"snap":"2022-40-2023-06","text_gpt3_token_len":1377,"char_repetition_ratio":0.14978744,"word_repetition_ratio":0.050065875,"special_character_ratio":0.28653508,"punctuation_ratio":0.1308305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960507,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-08T18:23:40Z\",\"WARC-Record-ID\":\"<urn:uuid:6e86dd0a-a576-400e-9a13-aede6a842aa4>\",\"Content-Length\":\"128833\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca9a9fc2-5d0b-46ef-9d62-40827a7c75b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:1794ef0d-0b76-4161-960c-06338cf2615c>\",\"WARC-IP-Address\":\"104.86.80.92\",\"WARC-Target-URI\":\"https://jp.mathworks.com/matlabcentral/profile/authors/14644008\",\"WARC-Payload-Digest\":\"sha1:EBPNGANOHTYTXSRKRCEHDBPRXGPG5G27\",\"WARC-Block-Digest\":\"sha1:QGBDKPDRZGDNSELD6OHV5ZICOOWXZZ5H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500837.65_warc_CC-MAIN-20230208155417-20230208185417-00387.warc.gz\"}"}
https://homeschoolsolutions.org/tag/subtraction/
[ "Home » Posts tagged 'Subtraction'\n\n# Tag Archives: Subtraction\n\n## 10 Large Math Posters for Kids – Multiplication Chart, Division, Addition, Subtraction, Numbers 1-100 +, 3D Shapes, Fractions, Decimals, Percentages, Roman Numerals, Place Value, Money (PAPER) 18 x 24\n\n### 10 Large Math Posters for Kids – Multiplication Chart, Division, Addition, Subtraction, Numbers 1-100 +, 3D Shapes, Fractions, Decimals, Percentages, Roman Numerals, Place Value, Money (PAPER) 18 x 24", null, "• 10 Educational Math Posters for Kids – Multiplication Chart, Division, Addition, Subtraction, Numbers 1-100 +, 3D Shapes, Fractions, Decimals, Percentages, Roman Numerals, Place Value\n• 120 lb Poster Paper (NON – LAMINATED)\n\n10 Educational Math Posters for Kids – Multiplication Chart, Division, Addition, Subtraction, Numbers 1-100 +, 3D Shapes, Fractions, Decimals, Percentages, Roman Numerals, Place Value\n\nList Price: \\$ 19.95\n\nPrice: \\$ 19.95\n\n## 75 Worksheets for Daily Math Practice: Addition, Subtraction, Multiplication, Division: Maths Workbook\n\n### 75 Worksheets for Daily Math Practice: Addition, Subtraction, Multiplication, Division: Maths Workbook", null, "## Daily Math Practice 75 Worksheets\n\nThis e-book contains several math worksheets for practice. There is one worksheet for each type of math problem including different digits with operations of addition, subtraction, multiplication and division. These varying level of mathematical ability activities help in improving adding, subtracting, multiplying and dividing operation skills of the student by frequent practicing of the worksheets provided.\n\nThere is nothing more effective than a pencil and paper for practicing some math skills. These math worksheets are ideal for teachers, parents, students, and home schoolers. The companion ebook allows you to take print outs of these worksheets instantly or you can save them for later use. The learner can significantly improve math knowledge by developing a simple habit to daily practice the math drills.\n\nTutors and homeschoolers use the maths worksheets to test and measure the child’s mastery of basic math skills. These math drill sheets can save you precious planning time when homeschooling as you can use these work sheets to give extra practice of essential math skills. Parents use these mathematics worksheets for their kids homework practice too.\n\nDesigned for after school study and self study, it is used by homeschooler, special needs and gifted kids to add to the learning experience in positive ways. You can also use the worksheets during the summer to get your children ready for the upcoming school term. It helps your child excel in school as well as in building good study habits. If a workbook or mathematic textbook is not allowing for much basic practise, these sheets give you the flexibility to follow the practice that your student needs for an education curriculum.\n\nThese worksheets are not designed to be grade specific for students, rather depend on how much practice they’ve had at the skill in the past and how the curriculum in your school is organized. Kids work at their own level and their own pace through these activities. The learner can practice one worksheet a day, two worksheets a day, one every alternate day, one per week, two per week or can follow any consistent pattern. Make best use of your judgement.\n\nPrice:" ]
[ null, "https://images-na.ssl-images-amazon.com/images/I/61mzJrDbSdL._SL160_.jpg", null, "http://ecx.images-amazon.com/images/I/51Huqp03kWL._SL160_.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9094201,"math_prob":0.6840682,"size":3317,"snap":"2020-24-2020-29","text_gpt3_token_len":697,"char_repetition_ratio":0.146393,"word_repetition_ratio":0.21301775,"special_character_ratio":0.20590895,"punctuation_ratio":0.1524288,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96734387,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-05T08:47:40Z\",\"WARC-Record-ID\":\"<urn:uuid:9656ebbf-874d-480b-8aea-816941015a81>\",\"Content-Length\":\"45334\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19822927-2b80-477c-82cc-28e788330622>\",\"WARC-Concurrent-To\":\"<urn:uuid:3adbf258-3e85-43b2-aaad-6ffe7b84dc3e>\",\"WARC-IP-Address\":\"97.79.239.127\",\"WARC-Target-URI\":\"https://homeschoolsolutions.org/tag/subtraction/\",\"WARC-Payload-Digest\":\"sha1:W4SBH4WUYAIN2IJFFOYYRS6VEMC2FLLM\",\"WARC-Block-Digest\":\"sha1:NUW25WGY6VV3KAHPPJSMMJOQHSGL2EZ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348496026.74_warc_CC-MAIN-20200605080742-20200605110742-00338.warc.gz\"}"}
https://pythonpedia.com/en/tutorial/9050/ctypes
[ "ctypes\n\nIntroduction\n\n`ctypes` is a python built-in library that invokes exported functions from native compiled libraries.\n\nNote: Since this library handles compiled code, it is relatively OS dependent.\n\nBasic ctypes object\n\nThe most basic object is an int:\n\nNow, `obj` refers to a chunk of memory containing the value 12.\n\nThat value can be accessed directly, and even modified:\n\nSince `obj` refers to a chunk of memory, we can also find out it's size and location:\n\nBasic usage\n\nLet's say we want to use `libc`'s `ntohl` function.\n\nFirst, we must load `libc.so`:\n\nThen, we get the function object:\n\nAnd now, we can simply invoke the function:\n\nWhich does exactly what we expect it to do.\n\nCommon pitfalls\n\nThe first possible error is failing to load the library. In that case an OSError is usually raised.\n\nThis is either because the file doesn't exists (or can't be found by the OS):\n\nAs you can see, the error is clear and pretty indicative.\n\nThe second reason is that the file is found, but is not of the correct format.\n\nIn this case, the file is a script file and not a `.so` file. This might also happen when trying to open a `.dll` file on a Linux machine or a 64bit file on a 32bit python interpreter. As you can see, in this case the error is a bit more vague, and requires some digging around.\n\nFailing to access a function\n\nAssuming we successfully loaded the `.so` file, we then need to access our function like we've done on the first example.\n\nWhen a non-existing function is used, an `AttributeError` is raised:\n\nComplex usage\n\nLet's combine all of the examples above into one complex scenario: using `libc`'s `lfind` function.\n\nFor more details about the function, read the man page. I urge you to read it before going on.\n\nFirst, we'll define the proper prototypes:\n\nThen, let's create the variables:\n\nAnd now we define the comparison function:\n\nNotice that `x`, and `y` are `POINTER(c_int)`, so we need to dereference them and take their values in order to actually compare the value stored in the memory.\n\nNow we can combine everything together:\n\n`ptr` is the returned void pointer. If `key` wasn't found in `arr`, the value would be `None`, but in this case we got a valid value.\n\nNow we can convert it and access the value:\n\nAlso, we can see that `ptr` points to the correct value inside `arr`:\n\nctypes arrays\n\nAs any good C programmer knows, a single value won't get you that far. What will really get us going are arrays!\n\nThis is not an actual array, but it's pretty darn close! We created a class that denotes an array of 16 `int`s.\n\nNow all we need to do is to initialize it:\n\nNow `arr` is an actual array that contains the numbers from 0 to 15.\n\nThey can be accessed just like any list:\n\nAnd just like any other `ctypes` object, it also has a size and a location:\n\nWrapping functions for ctypes\n\nIn some cases, a C function accepts a function pointer. As avid `ctypes` users, we would like to use those functions, and even pass python function as arguments.\n\nLet's define a function:\n\nNow, that function takes two arguments and returns a result of the same type. 
For the sake of the example, let's assume that type is an int.\n\nAs we did in the array example, we can define an object that denotes that prototype:\n\nThat prototype denotes a function that returns a `c_int` (the first argument), and accepts two `c_int` arguments (the other arguments).\n\nNow let's wrap the function:\n\nFunction prototypes have one more use: they can wrap `ctypes` functions (like `libc.ntohl`) and verify that the correct arguments are used when invoking the function." ]
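The code listings that originally accompanied the walkthrough above did not survive extraction, so steps like "First, we must load libc.so" and "Then, we get the function object" appear without their snippets. The following is a minimal, hedged reconstruction of that basic usage; the library name "libc.so.6" is an assumption that fits a typical Linux system (ctypes.util.find_library("c") is a more portable lookup):

```python
import ctypes

# The most basic ctypes object: a chunk of memory holding the value 12.
obj = ctypes.c_int(12)
obj.value = 13                                  # the value can be read and modified
print(obj.value, ctypes.sizeof(obj), ctypes.addressof(obj))

# Load the C standard library (name assumed; typical for Linux).
libc = ctypes.CDLL("libc.so.6")

# Get the exported function object and invoke it.
ntohl = libc.ntohl
print(ntohl(0x01000000))                        # prints 1 on a little-endian host

# A ctypes array type: 16 C ints, initialized with the numbers 0..15.
IntArray16 = ctypes.c_int * 16
arr = IntArray16(*range(16))
print(arr[5], ctypes.sizeof(arr), ctypes.addressof(arr))
```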
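Similarly, the lfind walkthrough and the function-prototype wrapping are described without their listings; a sketch of that usage, under the same library-name assumption, might look like this:

```python
import ctypes

libc = ctypes.CDLL("libc.so.6")                  # library name assumed (typical Linux)

# Prototype object: a C-callable function returning c_int and taking two
# pointers to c_int (lfind hands the key and each element to the comparator).
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def py_cmp(x, y):
    # x and y arrive as POINTER(c_int): dereference and compare the values.
    # lfind treats a zero return as "match" and anything else as "no match".
    return 0 if x.contents.value == y.contents.value else 1

arr = (ctypes.c_int * 5)(1, 5, 7, 33, 12)        # the array to search
key = ctypes.c_int(7)                            # the value we look for
nmemb = ctypes.c_size_t(len(arr))                # lfind wants a size_t *

libc.lfind.restype = ctypes.POINTER(ctypes.c_int)   # reinterpret the returned void *

ptr = libc.lfind(ctypes.byref(key), arr, ctypes.byref(nmemb),
                 ctypes.c_size_t(ctypes.sizeof(ctypes.c_int)),
                 CMPFUNC(py_cmp))

if ptr:                        # a NULL result is falsy: key not found
    print(ptr.contents.value)  # prints 7, the value found inside arr
```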
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8500506,"math_prob":0.83809125,"size":3503,"snap":"2019-26-2019-30","text_gpt3_token_len":809,"char_repetition_ratio":0.12546442,"word_repetition_ratio":0.006279435,"special_character_ratio":0.22038253,"punctuation_ratio":0.12311901,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9814348,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T04:47:32Z\",\"WARC-Record-ID\":\"<urn:uuid:ea5570c3-35d3-423d-b661-643c22651e60>\",\"Content-Length\":\"89135\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:58b23bcc-8530-4cd6-ac7d-77c87773e10c>\",\"WARC-Concurrent-To\":\"<urn:uuid:06389508-9a99-4cae-a704-f93aceb7126d>\",\"WARC-IP-Address\":\"40.83.160.29\",\"WARC-Target-URI\":\"https://pythonpedia.com/en/tutorial/9050/ctypes\",\"WARC-Payload-Digest\":\"sha1:LKLDGPPB45TXQV4ZCZHMB66KDDHMHGNT\",\"WARC-Block-Digest\":\"sha1:XWQWDU5H2XCK7KOERAMSJYJTTR2N6TCX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998376.42_warc_CC-MAIN-20190617043021-20190617065021-00343.warc.gz\"}"}
https://7id.xray.aps.anl.gov/software/octave/html/interpreter/Plotting.html
[ "Next: , Previous: Input and Output, Up: Top\n\n## 17 Plotting\n\nAll of Octave's plotting functions use `gnuplot` to handle the actual graphics. Most types of plots can be generated using the basic plotting functions, which are patterned after the equivalent functions in Matlab. The use of these functions is generally straightforward, and is the preferred method for generating plots. However, for users familiar with `gnuplot`, or for some specialized applications where the basic commands are inadequate, Octave also provides two low-level functions, `gplot` and `gsplot`, that behave almost exactly like the corresponding `gnuplot` functions `plot` and `splot`. Also note that some advanced Matlab features from recent versions are not implemented, such as handle-graphics and related functions." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87305313,"math_prob":0.92474526,"size":784,"snap":"2023-40-2023-50","text_gpt3_token_len":167,"char_repetition_ratio":0.13076924,"word_repetition_ratio":0.0,"special_character_ratio":0.17219388,"punctuation_ratio":0.13333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97574145,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-05T03:18:21Z\",\"WARC-Record-ID\":\"<urn:uuid:b0264d72-4240-4097-be27-b791b6de6247>\",\"Content-Length\":\"4411\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01bbf72c-f6a3-408e-803b-32b2b26b56bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:50978864-93a2-4fba-ad8f-679c8a029828>\",\"WARC-IP-Address\":\"104.18.35.53\",\"WARC-Target-URI\":\"https://7id.xray.aps.anl.gov/software/octave/html/interpreter/Plotting.html\",\"WARC-Payload-Digest\":\"sha1:O2JWNZLU2JOU2JUKF2TJKVSCQWOI3YOE\",\"WARC-Block-Digest\":\"sha1:DSK5R6SE7T5STCTL4WMQZ2XFRRBRBVZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511717.69_warc_CC-MAIN-20231005012006-20231005042006-00214.warc.gz\"}"}
https://stats.stackexchange.com/questions/40905/arima-model-interpretation
[ "# ARIMA model interpretation\n\nI have a question about ARIMA models. Let's say I have a time series $Y_t$ that I would like to forecast and an $\\text{ARIMA}(2,2)$ model seems like a good way to conduct the forecasting exercise. $$\\Delta Y_t = \\alpha_1 \\Delta Y_{t-1} + \\alpha_2 \\Delta Y_{t-2} + \\nu_{t} + \\theta_1 \\nu_{t-1} + \\theta_2 \\nu_{t-2}$$ Now the lagged $Y$'s imply that my series today is influenced by prior events. This makes sense. But what is the interpretation of the errors? My prior residual (how off I was in my calculation) is influencing the value of my series today? How are the lagged residuals calculated in this regression as it is the product / remainder of the regression?\n\n• I think that you need to remember that ARIMA models are atheoretic models, so the usual rules of interpreting estimated regression coefficients do not strictly apply in the same way. ARIMA models have certain features to be aware of. For example, the lower the values of $\\alpha_{1}$ in an AR(1) then the quicker is the rate of convergence. But, take for instance, an AR(2) model. Not all AR(2) models are the same! For example, if the condition $(\\alpha_{1}^{2}+4\\alpha_{2}<0)$ is satisfied then the AR(2) displays pseudo periodic behaviour and as a result its forecasts are stochastic cycles. – Graeme Walsh Jul 1 '13 at 10:24\n• (cont...) In a somewhat similar manner, when dealing with vector autoregressions, one tends to interpret the impulse response functions (IRFs) rather than the estimated coefficients; the coefficients are often too difficult to interpret, but sense can usually be made of the IRFs. Out of curiosity, have you found many papers in which the author(s) dedicated much attention to interpreting the coefficients in an ARIMA model? – Graeme Walsh Jul 1 '13 at 10:29\n• There appears to be a notation problem. \"$\\text{ARIMA}(2,2)$\" can't be right, since ARIMA models have three terms $(p,d,q)$ for each of the AR/I/MA components respectively, while ARMA models have two (e.g. $\\text{ARMA}(2,2)$) - but you appear to have first differencing, which would suggest you mean $\\text{ARIMA}(2,1,2)$. Please edit to reflect your intent. – Glen_b -Reinstate Monica Jul 2 '13 at 3:17\n• @Glen_b I recall asking the same thing on another question. It turns out that we have a duplication of sorts. The present question and the one linked to are very similar. – Graeme Walsh Jul 2 '13 at 9:22\n\nI think that you need to remember that ARIMA models are atheoretic models, so the usual approach to interpreting estimated regression coefficients does not really carry over to ARIMA modelling.\n\nIn order to interpret (or understand) estimated ARIMA models, one would do well to be cognizant of the different features displayed by a number of common ARIMA models.\n\nWe can explore some of these features by investigating the types of forecasts produced by different ARIMA models. This is the main approach that I've taken below, but a good alternative would be to look at the impulse response functions or dynamic time paths associated with different ARIMA models (or stochastic difference equations). I'll talk about these at the end.\n\nAR(1) Models\n\nLet's consider an AR(1) model for a moment. In this model, we can say that the lower the value of $\\alpha_{1}$ then the quicker is the rate of convergence (to the mean). 
We can try to understand this aspect of AR(1) models by investigating the nature of the forecasts for a small set of simulated AR(1) models with different values for $\\alpha_{1}$.\n\nThe set of four AR(1) models that we'll discuss can be written in algebraic notation as: \\begin{equation} Y_{t} = C + 0.95 Y_{t-1} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (1)\\\\ Y_{t} = C + 0.8 Y_{t-1} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (2)\\\\ Y_{t} = C + 0.5 Y_{t-1} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (3)\\\\ Y_{t} = C + 0.4 Y_{t-1} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (4) \\end{equation} where $C$ is a constant and the rest of the notation follows from the OP. As can be seen, each model differs only with respect to the value of $\\alpha_{1}$.\n\nIn the graph below, I have plotted out-of-sample forecasts for these four AR(1) models. It can be seen that the forecasts for the AR(1) model with $\\alpha_{1} = 0.95$ converges at a slower rate with respect to the other models. The forecasts for the AR(1) model with $\\alpha_{1} = 0.4$ converges at a quicker rate than the others.", null, "Note: when the red line is horizontal, it has reached the mean of the simulated series.\n\nMA(1) Models\n\nNow let's consider four MA(1) models with different values for $\\theta_{1}$. The four models we'll discuss can be written as: \\begin{equation} Y_{t} = C + 0.95 \\nu_{t-1} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (5)\\\\ Y_{t} = C + 0.8 \\nu_{t-1} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (6)\\\\ Y_{t} = C + 0.5 \\nu_{t-1} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (7)\\\\ Y_{t} = C + 0.4 \\nu_{t-1} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (8) \\end{equation}\n\nIn the graph below, I have plotted out-of-sample forecasts for these four different MA(1) models. As the graph shows, the behaviour of the forecasts in all four cases are markedly similar; quick (linear) convergence to the mean. Notice that there is less variety in the dynamics of these forecasts compared to those of the AR(1) models.", null, "Note: when the red line is horizontal, it has reached the mean of the simulated series.\n\nAR(2) Models\n\nThings get a lot more interesting when we start to consider more complex ARIMA models. Take for example AR(2) models. These are just a small step up from the AR(1) model, right? Well, one might like to think that, but the dynamics of AR(2) models are quite rich in variety as we'll see in a moment.\n\nLet's explore four different AR(2) models:\n\n\\begin{equation} Y_{t} = C + 1.7 Y_{t-1} -0.8 Y_{t-2} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (9)\\\\ Y_{t} = C + 0.9 Y_{t-1} -0.2 Y_{t-2} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (10)\\\\ Y_{t} = C + 0.5 Y_{t-1} -0.2 Y_{t-2} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (11)\\\\ Y_{t} = C + 0.1 Y_{t-1} -0.7 Y_{t-2} + \\nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (12) \\end{equation}\n\nThe out-of-sample forecasts associated with each of these models is shown in the graph below. It is quite clear that they each differ significantly and they are also quite a varied bunch in comparison to the forecasts that we've seen above - except for model 2's forecasts (top right plot) which behave similar to those for an AR(1) model.", null, "Note: when the red line is horizontal, it has reached the mean of the simulated series.\n\nThe key point here is that not all AR(2) models have the same dynamics! 
For example, if the condition, \\begin{equation} \\alpha_{1}^{2}+4\\alpha_{2} < 0, \\end{equation} is satisfied then the AR(2) model displays pseudo periodic behaviour and as a result its forecasts will appear as stochastic cycles. On the other hand, if this condition is not satisfied, stochastic cycles will not be present in the forecasts; instead, the forecasts will be more similar to those for an AR(1) model.\n\nIt's worth noting that the above condition comes from the general solution to the homogeneous form of the linear, autonomous, second-order difference equation (with complex roots). If this if foreign to you, I recommend both Chapter 1 of Hamilton (1994) and Chapter 20 of Hoy et al. (2001).\n\nTesting the above condition for the four AR(2) models results in the following: \\begin{equation} (1.7)^{2} + 4 (-0.8) = -0.31 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (13)\\\\ (0.9)^{2} + 4 (-0.2) = 0.01 > 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (14)\\\\ (0.5)^{2} + 4 (-0.2) = -0.55 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (15)\\\\ (0.1)^{2} + 4 (-0.7) = -2.54 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (16) \\end{equation}\n\nAs expected by the appearance of the plotted forecasts, the condition is satisfied for each of the four models except for model 2. Recall from the graph, model 2's forecasts behave (\"normally\") similar to an AR(1) model's forecasts. The forecasts associated with the other models contain cycles.\n\nApplication - Modelling Inflation\n\nNow that we have some background under our feet, let's try to interpret an AR(2) model in an application. Consider the following model for the inflation rate ($\\pi_{t}$): \\begin{equation} \\pi_{t} = C + \\alpha_{1} \\pi_{t-1} + \\alpha_{2} \\pi_{t-2} + \\nu_{t}. \\end{equation} A natural expression to associate with such a model would be something like: \"inflation today depends on the level of inflation yesterday and on the level of inflation on the day before yesterday\". Now, I wouldn't argue against such an interpretation, but I'd suggest that some caution be drawn and that we ought to dig a bit deeper to devise a proper interpretation. In this case we could ask, in which way is inflation related to previous levels of inflation? Are there cycles? If so, how many cycles are there? Can we say something about the peak and trough? How quickly do the forecasts converge to the mean? And so on.\n\nThese are the sorts of questions we can ask when trying to interpret an AR(2) model and as you can see, it's not as straightforward as taking an estimated coefficient and saying \"a 1 unit increase in this variable is associated with a so-many unit increase in the dependent variable\" - making sure to attach the ceteris paribus condition to that statement, of course.\n\nBear in mind that in our discussion so far, we have only explored a selection of AR(1), MA(1), and AR(2) models. We haven't even looked at the dynamics of mixed ARMA models and ARIMA models involving higher lags.\n\nTo show how difficult it would be to interpret models that fall into that category, imagine another inflation model - an ARMA(3,1) with $\\alpha_{2}$ constrained to zero: \\begin{equation} \\pi_{t} = C + \\alpha_{1} \\pi_{t-1} + \\alpha_{3} \\pi_{t-3} + \\theta_{1}\\nu_{t-1} + \\nu_{t}. \\end{equation}\n\nSay what you'd like, but here it's better to try to understand the dynamics of the system itself. 
As before, we can look and see what sort of forecasts the model produces, but the alternative approach that I mentioned at the beginning of this answer was to look at the impulse response function or time path associated with the system.\n\nThis brings me to next part of my answer where we'll discuss impulse response functions.\n\nImpulse Response Functions\n\nThose who are familiar with vector autoregressions (VARs) will be aware that one usually tries to understand the estimated VAR model by interpreting the impulse response functions; rather than trying to interpret the estimated coefficients which are often too difficult to interpret anyway.\n\nThe same approach can be taken when trying to understand ARIMA models. That is, rather than try to make sense of (complicated) statements like \"today's inflation depends on yesterday's inflation and on inflation from two months ago, but not on last week's inflation!\", we instead plot the impulse response function and try to make sense of that.\n\nApplication - Four Macro Variables\n\nFor this example (based on Leamer(2010)), let's consider four ARIMA models based on four macroeconomic variables; GDP growth, inflation, the unemployment rate, and the short-term interest rate. The four models have been estimated and can be written as: \\begin{eqnarray} Y_{t} &=& 3.20 + 0.22 Y_{t-1} + 0.15 Y_{t-2} + \\nu_{t}\\\\ \\pi_{t} &=& 4.10 + 0.46 \\pi_{t-1} + 0.31\\pi_{t-2} + 0.16\\pi_{t-3} + 0.01\\pi_{t-4} + \\nu_{t}\\\\ u_{t} &=& 6.2+ 1.58 u_{t-1} - 0.64 u_{t-2} + \\nu_{t}\\\\ r_{t} &=& 6.0 + 1.18 r_{t-1} - 0.23 r_{t-2} + \\nu_{t} \\end{eqnarray} where $Y_{t}$ denotes GDP growth at time $t$, $\\pi$ denotes inflation, $u$ denotes the unemployment rate, and $r$ denotes the short-term interest rate (3-month treasury).\n\nThe equations show that GDP growth, the unemployment rate, and the short-term interest rate are modeled as AR(2) processes while inflation is modeled as an AR(4) process.\n\nRather than try to interpret the coefficients in each equation, let's plot the impulse response functions (IRFs) and interpret them instead. The graph below shows the impulse response functions associated with each of these models.", null, "Don't take this as a masterclass in interpreting IRFs - think of it more like a basic introduction - but anyway, to help us interpret the IRFs we'll need to accustom ourselves with two concepts; momentum and persistence.\n\nThese two concepts are defined in Leamer (2010) as follows:\n\nMomentum: Momentum is the tendency to continue moving in the same direction. The momentum effect can offset the force of regression (convergence) toward the mean and can allow a variable to move away from its historical mean, for some time, but not indefinitely.\n\nPersistence: A persistence variable will hang around where it is and converge slowly only to the historical mean.\n\nEquipped with this knowledge, we now ask the question: suppose a variable is at its historical mean and it receives a temporary one unit shock in a single period, how will the variable respond in future periods? This is akin to asking those questions we asked before, such as, do the forecasts contains cycles?, how quickly do the forecasts converge to the mean?, etc.\n\nAt last, we can now attempt to interpret the IRFs.\n\nFollowing a one unit shock, the unemployment rate and short-term interest rate (3-month treasury) are carried further from their historical mean. This is the momentum effect. 
The IRFs also show that the unemployment rate overshoots to a greater extent than does the short-term interest rate.\n\nWe also see that all of the variables return to their historical means (none of them \"blow up\"), although they each do this at different rates. For example, GDP growth returns to its historical mean after about 6 periods following a shock, the unemployment rate returns to its historical mean after about 18 periods, but inflation and short-term interest take longer than 20 periods to return to their historical means. In this sense, GDP growth is the least persistent of the four variables while inflation can be said to be highly persistent.\n\nI think it's a fair conclusion to say that we've managed (at least partially) to make sense of what the four ARIMA models are telling us about each of the four macro variables.\n\nConclusion\n\nRather than try to interpret the estimated coefficients in ARIMA models (difficult for many models), try instead to understand the dynamics of the system. We can attempt this by exploring the forecasts produced by our model and by plotting the impulse response function.\n\n[I'm happy enough to share my R code if anyone wants it.]\n\nReferences\n\n• Hamilton, J. D. (1994). Time series analysis (Vol. 2). Princeton: Princeton university press.\n• Leamer, E. (2010). Macroeconomic Patterns and Stories - A Guide for MBAs, Springer.\n• Stengos, T., M. Hoy, J. Livernois, C. McKenna and R. Rees (2001). Mathematics for Economics, 2nd edition, MIT Press: Cambridge, MA.\n• Love the application of IRF to non-VARs. They always seem to be associated and I'd never thought of using IRFs on mere ARIMAs. (That plus, who can really understand what MA terms do?) – Wayne Sep 14 '16 at 20:05\n• What a great answer! – Richard Hardy Mar 17 '17 at 11:40\n\nNote that due to Wold's decomposition theorem you can rewrite any stationary ARMA model as a $MA(\\infty)$ model, i.e. :\n\n$$\\Delta Y_t=\\sum_{j=0}^{\\infty} \\psi_j\\nu_{t-j}$$\n\nIn this form there are no lagged variables, so any interpretation involving notion of a lagged variable is not very convincing. However looking at the $MA(1)$ and the $AR(1)$ models separately:\n\n$$Y_t=\\nu_t+\\theta_{1}\\nu_{t-1}$$\n\n$$Y_t=\\rho Y_{t-1}+\\nu_{t}=\\nu_t+\\rho \\nu_{t-1}+ \\rho^2 \\nu_{t-1}+...$$\n\nyou can say that error terms in ARMA models explain \"short-term\" influence of the past, and lagged terms explain \"long-term\" influence. Having said that I do not think that this helps a lot and usually nobody bothers with the precise interpretation of ARMA coefficients. The goal usually is to get an adequate model and use it for forecasting.\n\n• +1 This is more or less what I was trying to get at in my comments above. – Graeme Walsh Jul 1 '13 at 10:50\n• Ha, I did not see your comments, when I was writing the answer. I suggest converting them to the answer. – mpiktas Jul 1 '13 at 10:52\n\nI totally agree with the sentiment of the previous commentators. I would like to add that all ARIMA model can also be represented as a pure AR model. These weights are referred to as the Pi weights as compared to the pure MA form (Psi weights) . In this way you can view (interpret) an ARIMA model as an optimized weighted average of the past values. In other words rather than assume a pre-specified length and values for a weighted average , an ARIMA model delivers both the length ($n$) of the weights and the actual weights ($c_1,c_2,...,c_n$).\n\n$$Y(t) =c_1 Y(t−1) + c_2 Y(t-2) + c_3 Y(t-3)+ ... 
+ c_n Y(t-n) + a(t)$$\n\nIn this way an ARIMA model can be explained as the answer to the question\n\n1. How many historical values should I use to compute a weighted sum of the past?\n2. Precisely what are those values?" ]
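As a small numerical companion to the AR(2) discussion above (the answer offers to share R code; this is an independent Python sketch, not that script), one can verify the pseudo-periodic condition for the four quoted AR(2) models and trace an impulse response simply by iterating the difference equation:

```python
import numpy as np

# Pseudo-periodic condition a1^2 + 4*a2 < 0 for the four AR(2) models (9)-(12).
models = {"(9)": (1.7, -0.8), "(10)": (0.9, -0.2),
          "(11)": (0.5, -0.2), "(12)": (0.1, -0.7)}

for name, (a1, a2) in models.items():
    disc = a1 ** 2 + 4 * a2
    print(f"model {name}: a1^2 + 4*a2 = {disc:+.2f} -> stochastic cycles: {disc < 0}")
# Note: with the quoted coefficients, model (12) gives -2.79 rather than the
# -2.54 stated in the text; the sign, and hence the conclusion, is unchanged.

def impulse_response(a1, a2, periods=20):
    """Path of y_t = a1*y_{t-1} + a2*y_{t-2} + e_t after a one-unit shock at t = 0."""
    y = np.zeros(periods)
    y[0] = 1.0                                    # the unit shock
    for t in range(1, periods):
        y[t] = a1 * y[t - 1] + (a2 * y[t - 2] if t >= 2 else 0.0)
    return y

# Model (9): complex AR roots of modulus sqrt(0.8) ~ 0.89, so the response decays
# slowly and cycles with a period of roughly 20 steps.
print(np.round(impulse_response(1.7, -0.8, 24), 3))
```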
[ null, "https://i.stack.imgur.com/iOK1i.png", null, "https://i.stack.imgur.com/KAegH.png", null, "https://i.stack.imgur.com/zPb1e.png", null, "https://i.stack.imgur.com/EGUsq.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9009019,"math_prob":0.9899731,"size":17143,"snap":"2019-51-2020-05","text_gpt3_token_len":4519,"char_repetition_ratio":0.15864404,"word_repetition_ratio":0.07112223,"special_character_ratio":0.3006475,"punctuation_ratio":0.10111011,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987485,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,7,null,7,null,7,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-11T21:36:23Z\",\"WARC-Record-ID\":\"<urn:uuid:d1154de2-e02b-4e21-b560-32e670e2306e>\",\"Content-Length\":\"173908\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:559572de-4d6e-4685-80e3-8213c28f1958>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ce3604f-702f-4ba4-bb52-2b8fdc214748>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/40905/arima-model-interpretation\",\"WARC-Payload-Digest\":\"sha1:YBY5LMMMINUN3SQDCUJQG4FAD44QSHOA\",\"WARC-Block-Digest\":\"sha1:7GPUUGQSCSEEEZKUI5PFDD3PT5K3J7FH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540533401.22_warc_CC-MAIN-20191211212657-20191212000657-00236.warc.gz\"}"}
http://proxy.osapublishing.org/oe/fulltext.cfm?uri=oe-11-21-2783&id=77672
[ "## Abstract\n\nHigh resolution wavefront sensors are devices with a great practical interest since they are becoming a key part in an increasing number of applications like extreme Adaptive Optics. We describe the optical differentiation wavefront sensor, consisting of an amplitude mask placed at the intermediate focal plane of a 4-f setup. This sensor offers the advantages of high resolution and adjustable dynamic range. Furthermore, it can work with polychromatic light sources. In this paper we show that, even in adverse low-light-level conditions, its SNR compares quite well to that corresponding to the Hartmann-Shack sensor.\n\n## 1. Introduction\n\nWavefront sensing is a technique that has been successfully applied in many different fields like optical quality testing, Adaptive Optics, etc. In recent years the number of potential fields has been extended since the development of low-cost devices allowed the application of Adaptive Optics in other fields such as lasers, confocal microscopy or human vision . Furthermore, there are specific fields where an accurate description of the incoming wavefront is of particular relevance such as Adaptive Optics systems for very large telescopes, and especially in the search for exoplanets, or wavefront sensing for monitoring LASIK surgery.\n\nWe propose a new high resolution wavefront sensor which consists of a telescopic system with a mask at the intermediate focal plane. The incoming field is Fourier transformed by the first lens, then is multiplied by the mask and, finally, is Fourier transformed again. The mask amplitude increases linearly along a certain direction. The wavefront phase derivative is obtained from the detected light intensity at the telescopic system image plane. A rotating filter can be used to provide the derivatives in two orthogonal directions. In practice, the sensor performs an optical differentiation process (that resembles the Foucault knife-edge test principles).\n\nThe optical differentiation is a relatively old technique occasionally used to retrieve phase information. It is worth mention the paper of Bortz et. al. where a phase retrieval technique similar to the one described here was presented. However, the Bortz’s method requires three measurements with different filters and, as a consequence, a complicated expression for the phase derivative is needed.\n\nIn this paper, we analyze the actual implementation of the sensor and the parameters that characterize the mask, we develop an expression for the signal-to-noise ratio of the technique as a function of the mask characterizing parameters for read noise and photon noise and we compare with the Hartmann-Shack SNR under different conditions. In order to perform a complete comparison between the features of both sensors we also compare their dynamic range. Finally, we have developed a computer simulation procedure to compare the Optical Differentiation (OD) sensor performance with that of the Hartmann-Shack (H-S) sensor.\n\nThe main advantages of the new sensor are its high (and adjustable) spatial resolution, the easily adjusting of the dynamic range and that it is able to work with polychromatic sources . Currently, there are other adjustable resolution sensors, [7,8] but they present limitations that this sensor overcomes. The main drawback is the energy loss due to the mask absorption, although the Optical Differentiation sensor presents a SNR comparable, or higher, than that of the Hartmann-Shack provided a proper election of the sensor parameters.\n\n## 2. 
The optical differentiation sensor\n\nTo describe the theoretical principles of the OD sensor, let us consider the electric field E(x,y) = A e^{jϕ(x,y)}, where A is the constant amplitude and ϕ(x,y) is the wavefront phase. The arrangement consists of a pair of achromatic lenses forming a telescopic system and a mask M placed at the intermediate focal plane (Fig. 1). The first lens performs the FT of the input field on the mask, then the product of the transformed field times the mask is Fourier transformed again onto a detection system (CCD). Two separate measurements are required to obtain the wavefront phase slope in two orthogonal directions.", null, "Fig. 1. Set-up of the OD sensor. OF is the amplitude mask for optical differentiation. L1 and L2 are achromatic lenses of equal focal length.\n\nThe masks that perform the differentiation along the x (or y) direction, M_x (or M_y), have linearly increasing amplitude along the derivative direction and can be described as:\n\n$M_x = 2\pi b_r r_x + a = 2\pi b u_x + a$\n$M_y = 2\pi b_r r_y + a = 2\pi b u_y + a$\n\nwhere λ is the wavelength, f the focal distance of the first lens and r_x and r_y represent real distances in the mask plane. The mask can also be expressed in terms of the spatial frequencies of coordinates x and y in the pupil plane, u_x and u_y, where b = λ f b_r. In addition, a and b_r (or b) are two constant parameters that determine the mask behaviour.\n\nWhen a mask of this kind is placed at the intermediate plane of a telescopic system, due to the differentiation property of the FT, the intensity at the CCD is related to the field derivative along the corresponding mask direction:\n\n$I_x(x,y) = \left|FT^{-1}[FT(E(x,y))\cdot M_x]\right|^2 = \left|-jb\,\frac{\partial E(x,y)}{\partial x} + aE(x,y)\right|^2$\n$I_y(x,y) = \left|FT^{-1}[FT(E(x,y))\cdot M_y]\right|^2 = \left|-jb\,\frac{\partial E(x,y)}{\partial y} + aE(x,y)\right|^2$\n\nThen, by substituting the field expression, the derivatives of the wavefront phase along orthogonal directions can be obtained from the intensities,\n\n$\alpha_x = \frac{\partial\phi(x,y)}{\partial x} = \frac{\sqrt{I_x}/A - a}{b} = \frac{\sqrt{I_x}/A - a}{b_r \lambda f}$\n$\alpha_y = \frac{\partial\phi(x,y)}{\partial y} = \frac{\sqrt{I_y}/A - a}{b} = \frac{\sqrt{I_y}/A - a}{b_r \lambda f}$\n\nNote that the wavefront slope can be obtained as the product of the wavefront phase slope, α, times λ/2π, and thus it is independent of wavelength. It can be seen that the values of b_r and f control the dynamic range of the derivative estimate. In contrast with the H-S sensor, which is based on the measurement of a centroid position, this is a photometric sensor. Thus, the phase derivative is estimated at each pixel of the detector by comparing the intensity with that corresponding to a flat wavefront portion, I_0 = (aA)^2. The wavefront phase is sampled by the pixels contained in the CCD illuminated area, providing very high spatial resolution without limitations of the dynamic range.\n\nThis sensor can also be explained using a ray tracing picture. Note that if achromatic lenses are used, each small area of the sensor entrance pupil is directly mapped onto one area of the detection plane. In addition, parallel rays (wavefront regions with the same slope) will go to the same point at the filter plane, and thus will suffer the same attenuation. The intensity at each area of the detection plane provides an average of the wavefront phase slope for the area. When using polychromatic sources, the sensor also provides an average over the whole source bandwidth.\n\nThe mask defined in Eq. (1) is a filter with variable transmittance given by (2\pi b_r r + a)^2. We will take the parameter a = 0.5, which means that only amplitude filters are considered. Finally, the size of the filter is determined by the values of its parameters. 
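As an illustrative aside (an editorial sketch, not part of the original paper), the per-pixel slope recovery of Eq. (3) can be written in a few lines of C++. The values of A, a and b below are arbitrary assumptions chosen only to make the example concrete:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Assumed sensor parameters (illustrative only, not taken from the paper)
    const double A = 1.0;     // constant field amplitude
    const double a = 0.5;     // mask offset, as chosen in the text
    const double b = 5.0e-6;  // mask slope parameter in metres (assumed value)

    // Assumed measured intensities at three detector pixels; a flat wavefront
    // pixel gives I0 = (a*A)^2 = 0.25
    const double I[3] = {0.25, 0.2601, 0.2401};

    for (int k = 0; k < 3; ++k) {
        // Eq. (3): alpha = (sqrt(I)/A - a)/b, the local phase slope in rad/m
        const double alpha = (std::sqrt(I[k]) / A - a) / b;
        std::printf("pixel %d: alpha = %+.3e rad/m\n", k, alpha);
    }
    return 0;
}
```

A pixel at the flat-wavefront level gives zero slope, while brighter or darker pixels map to positive or negative slopes, which is the photometric principle described above.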
Assuming that the maximum value of the mask is equal to one (in order to minimize the lost energy), its width can be easily derived as W = 1/(2πb_r) = λf/(2πb), where it is assumed that the centre of the filter lies on the optical axis.\n\n## 3. Signal-to-noise ratio for the OD sensor\n\nThe two main sources of error in the OD sensor are the CCD read noise and the photon noise.\n\n#### 3.1. Read noise\n\nThe magnitude measured is the slope α as expressed in Eq. (3). The total intensity at each detector area of the sensor, I_x (or I_y), can be expressed as the sum of the intensities, I_i,j, of the N_p pixels of that detector area. Its variance, using the standard error propagation formula, can be written as:\n\n$σα2=∑i,j(12Ab∑Ii,j)2σr2=Np(12Ab∑Ii,j)2σr2$\n\nwhere σ_r is the read noise error of the CCD. Consequently, the signal-to-noise ratio for the OD sensor when only read noise is considered can be expressed as:\n\n$SNROD=2〈α〉Ab∑Ii,jNpσr=〈α〉nOD2bNpσr$\n\nwhere n_OD is the number of photons arriving at the corresponding area in the entrance pupil of the sensor and <…> means ensemble average. The sampling at the CCD plane can be easily changed using a zoom lens. Then, if a sampling of one pixel per sensor area is considered (N_p = 1), it is easy to show that:\n\n$SNROD=2〈α〉nODσr(1−a)Dlens2π1.22NA$\n\nwhere D_lens is the diameter of the lens used to evaluate the first Fourier transform, and b has been expressed in terms of the number of Airy rings covered by the filter, N_A (W = 1.22·N_A·λf/D_lens).\n\n#### 3.2. Photon noise\n\nFrom the expression of the slope α given by Eq. (3), its variance, using the standard error propagation formula, can be easily developed as:\n\n$σα2=σI2[2AbI]2$\n\nand the SNR, when detection is affected only by Poisson noise, will be:\n\n$SNROD=〈α〉2Ab=〈α〉nOD2b$\n\nIntroducing the expression of b, the SNR is expressed as:\n\n$SNROD=〈α〉nOD(1−a)Dlensπ1.22NA$\n\nTo maximize the SNR an actual filter should have a b value as large as possible. This is achieved by taking a = 0.5 and making the filter size as small as possible, although a compromise between the energy loss and the filter size is necessary.\n\nTo compare the SNR_OD with that of the Hartmann-Shack when only photon noise is considered, we set, as a particular case, the filter radius equal to the size of the 15th Airy ring, obtaining the following expression:\n\n$SNROD=〈α〉2Ab=〈α〉nOD0.0087Dlens$\n\n## 4. Signal-to-noise ratio for the H-S sensor\n\n#### 4.1. Read noise\n\nThe wavefront phase slope measured by the Hartmann-Shack (in the x direction) is:\n\n$αx=2πλf1xc$\n\nwhere f_l is the focal length of the microlens, λ is the incoming wavelength and x_c is the x-position of the spot centroid. The corresponding SNR for read noise is: [11,12]\n\n$SNRH−S=〈α〉3nH−SNtdπσrNw2$\n\nwhere n_H-S is the number of photons per microlens in the H-S sensor, N_t is the spot size in pixels, d is the diameter of the microlens and N_w is the width of the corresponding subaperture area in the CCD in pixels. As an example, in the case of a quad cell the parameters take the values N_w = 2 and N_t = 1.\n\n#### 4.2. Photon noise\n\nIn the case of photon noise the corresponding SNR is: [11,12,13]\n\n$SNRH−S=〈α〉nH−Sd0.86π$\n\nfor circular microlenses, where d is the diameter of the microlens.\n\nIt is necessary to state that both Eq. (12) and Eq. (13), due to the approximations used to develop them [11,12,13], only determine an upper limit to the SNR of the H-S.\n\nThe ratio between the SNR due to photon noise of both sensors is obtained from Eqs. 
(10) and (13):\n\n$SNRODSNRH−S=2b0.86πnODdnH−S=0.0087Dlens0.86πd2$\n\nIn this expression the ratio n_OD/n_H-S is set to ½ because the light of the OD sensor must be split into two channels.\n\nThe dependence of the SNR ratio on D_lens/d will be analyzed later using a computer simulation. This dependence is similar in the read noise and in the photon noise cases. For this reason, and as in most analyses in the literature, we will only consider the photon noise case in the next sections.\n\n## 5. Comparison of dynamic range\n\nThe range of wavefront phase slopes that can be measured also depends on the size of the filter and, consequently, on b. Thus, the maximum slope that can be measured is α_M = (2π/λ)(W/2f) = 1/(2b). This relationship between the parameters of the mask and the wavefronts to be measured enables the selection of the appropriate mask. Moreover, different masks can be implemented using an LCD. As a result, the dynamic range of the OD sensor can be easily adjusted. If we define the dynamic range as DR_OD = 2α_M, we obtain:\n\n$DROD=1b$\n\nIn contrast with the H-S, this equation shows that a high dynamic range can be attained without loss of spatial resolution. Now we consider the Hartmann-Shack dynamic range:\n\n$DRH−S=2πdλf1$\n\nHence, a high resolution H-S sensor will present a DR_H-S smaller than DR_OD when d < (W·f_l)/f. Furthermore, when the microlens size decreases, the size of the PSF at the microlens image plane increases, reducing the actual H-S sensor dynamic range even more.\n\n## 6. Advantages of high resolution sensing\n\nWe have performed a computer simulation to show the advantages of the OD sensor over the H-S sensor, which are especially relevant at high resolution. Four hundred distorted wavefronts following Kolmogorov statistics with D/r_0 = 1 were generated using Roddier’s technique. The number of Zernike modes that we used in the simulation of the wavefront was N = 560, and the first three modes (piston, tip and tilt) are assumed to be corrected. The number of samples in the wavefront was (π/4)×291×291. Then, the phase derivative was estimated using both an H-S sensor and our technique.\n\nThe first step in the comparison was to reproduce the ratio SNR_OD/SNR_H-S. An analysis of this ratio can be carried out as a function of the number of sensing areas covering the sensor entrance pupil. The number of sampling areas, (π/4)×N_s×N_s, is the number of microlenses used in the Hartmann-Shack and the number of areas in which the light intensity is detected in the Optical Differentiation sensor, where N_s = D_lens/d. From Eq. (14) we see that the ratio SNR_OD/SNR_H-S depends on the value of N_s. Fig. 2 shows the behaviour of SNR_OD/SNR_H-S obtained from computer simulation for N_s = 27, 30 and 35 as a function of the number of photons in each sensing area of the OD sensor. We see that for N_s = 27 the SNR_H-S is larger than the SNR_OD. However, for N_s = 30 and 35 the Optical Differentiation sensor provides an SNR better than that corresponding to the H-S sensor. It can be seen that Eq. (14) predicts that the ratio SNR_OD/SNR_H-S will be larger than one for N_s = 60. However, the simulation shows that a value over N_s = 30 is enough to reach this value. The reason for this discrepancy is that the SNR_H-S used in Eq. (14) has been evaluated using an approximate procedure, so that it only provides an upper limit of the SNR.\n\nFrom this analysis it is straightforward to deduce that a high resolution Hartmann-Shack sensor provides an SNR necessarily lower than that of the Optical Differentiation sensor. 
The physical reason for this behaviour can be explained as follows. As resolution increases, the SNR in both sensors decreases because fewer photons arrive at each detector area. However, for the H-S sensor there is an additional error source: increasing the resolution means reducing the lenslet size, which implies a wider and noisier centroid. This additional error source, which the OD sensor avoids, explains the advantage of the OD sensor at high resolution despite the better light efficiency of the H-S sensor.", null, "Fig. 2. Ratio of SNR of both sensors as a function of the number of photons in each sensing area of the OD sensor for different sensing area number: N_s = 27 (solid curve), 30 (dashed curve) and 35 (dotted curve).\n\nThe second step is to reconstruct the wavefront phase from a certain number, k, of coefficients obtained from the slopes. The error in the whole process is estimated using the residual phase variance of the reconstructed wavefront phase, defined as:\n\n$\sigma_{rec}^2 = \int_{Pupil}[\phi(\vec{r}) - \phi_{rec}(\vec{r})]^2\, d\vec{r} \approx \sum_{i=1}^{N}[a_i - a_i^{rec}]^2$\n\nwhere a_i are the coefficients of the corresponding Zernike polynomials and a_i^rec = 0 for i > k. Fig. 3 compares the residual variance obtained using the OD and the H-S sensor as a function of the number of reconstructed modes k. The main conclusion is that the accuracy of our technique is very similar to that of the Hartmann-Shack sensor even in adverse conditions (N_s = 8).\n\nFinally, the third step is to take advantage of the high spatial resolution attainable by our sensor. Fig. 3 also shows the residual variance when the number of sampling areas of the OD sensor is increased. Since a high resolution sensor samples the wavefront at a considerable number of points, the reconstruction of the incoming wavefront can be performed more accurately. In such a case, the OD sensor not only provides better accuracy in all cases but also allows the estimation of higher order modes.", null, "Fig. 3. Residual phase variance obtained using the H-S (dashed-dot curve) and the OD sensor (solid curve) with 80 sampling areas. The behaviour of the OD with a higher resolution is also shown (dotted curve: 112 sampling areas, long-dashed curve: 177 sampling areas). The values of the mask parameters are a = 0.5 and b = 0.0013 D/2.\n\n## 7. Simulated experiment\n\nIn order to analyse the behaviour of the OD sensor in wavefront compensation, we performed a simulated experiment. An atmospherically distorted wavefront has been generated (Fig. 4(a)). The local phase slopes have been estimated using the OD sensor (Fig. 4(b)), and then the phase is reconstructed using a standard procedure. The wavefront is reconstructed after 65 Zernike modes have been compensated (Fig. 4(c)). These figures show that the OD sensor can be a useful tool for those applications in which wavefront sensing is required.", null, "Fig. 4. (a) Atmospherically distorted wavefront. (b) Distorted wavefront slope estimation in the x direction. (c) Compensated wavefront.\n\n## 8. Conclusions\n\nWe have presented the Optical Differentiation wavefront sensor, which consists of a linearly increasing amplitude mask placed at the focal plane of a telescopic system. The main advantages of this sensor are that the dynamic range and sampling can be easily adjusted. Furthermore, it can work with polychromatic sources. This allows us to attain high resolution, and consequently to estimate a large number of wavefront modes, without loss of dynamic range. 
Moreover, we have shown that for high resolution sensing it presents better SNR and dynamic range than those of the standard Hartmann-Shack sensor, even in adverse photon noise conditions.\n\n## Acknowledgments\n\nThis research was supported by Ministerio de Ciencia y Tecnología grant AYA2000-1565-C02.\n\n## References\n\n1. G. Vdovin, “Micromachined membrane deformable mirrors,” in Adaptive Optics Engineering Handbook, R. Tyson, ed. (Marcel Dekker Inc., New York, 1998).\n\n2. M. A. Neil, M. J. Booth, and T. Wilson, “New modal wave-front sensor: a theoretical analysis,” J. Opt. Soc. Am. A 17, 1098–1107 (2000).\n\n3. E. J. Fernandez, I. Iglesias, and P. Artal, “Closed-loop adaptive optics in the human eye,” Opt. Lett. 26, 746–748 (2001).\n\n4. V. F. Canales and M. P. Cagigal, “Gain estimate for exoplanet detection with adaptive optics,” Astron. Astrophys. Suppl. Ser. 145, 445–449 (2000).\n\n5. J. C. Bortz and B. J. Thompson, “Phase retrieval by optical phase differentiation,” Proceedings of the SPIE 351, 71–79 (1983).\n\n6. O. von der Lühe, “Wavefront error measurement technique using extended, incoherent light sources,” Opt. Eng. 27, 1078–1087 (1988).\n\n7. E. N. Ribak, “Harnessing caustics for wave-front sensing,” Opt. Lett. 26, 1834–1836 (2001).\n\n8. R. Ragazzoni, “Pupil plane wavefront sensing with an oscillating prism,” J. Mod. Opt. 43, 289–293 (1996).\n\n9. K. Iizuka, Engineering Optics (Springer-Verlag, Berlin, 1987).\n\n10. J. M. Geary, Introduction to Wavefront Sensors (SPIE Press, Washington, 1995).\n\n11. R. Irwan and R. G. Lane, “Analysis of optimal centroid estimation applied to Shack-Hartmann sensing,” Appl. Opt. 32, 6737–6743 (1999).\n\n12. J. Primot, G. Rousset, and J. C. Fontanella, “Deconvolution from wave-front sensing: a new technique for compensating turbulence-degraded images,” J. Opt. Soc. Am. A 7, 1598–1608 (1990).\n\n13. B. M. Welsh and C. S. Gardner, “Performance analysis of adaptive optics systems using laser guide stars and slope sensors,” J. Opt. Soc. Am. A 12, 1913–1923 (1989).\n\n14. N. Roddier, “Atmospheric wavefront simulation using Zernike polynomials,” Opt. Eng. 29, 1174–1180 (1990).\n\n15. R. Cubalchini, “Modal wave-front estimation from phase derivative measurements,” J. Opt. Soc. Am. 69, 972–977 (1979)." ]
[ null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89356625,"math_prob":0.9300245,"size":18245,"snap":"2020-45-2020-50","text_gpt3_token_len":4212,"char_repetition_ratio":0.1442355,"word_repetition_ratio":0.015573227,"special_character_ratio":0.22663744,"punctuation_ratio":0.123548925,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.98881066,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T19:49:21Z\",\"WARC-Record-ID\":\"<urn:uuid:a6babef9-b0ed-4ce3-97ea-a1a7b20b9182>\",\"Content-Length\":\"219179\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9b614b0-1916-490c-b4a8-354f9ed0bdf2>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a520d82-edf3-4f0d-be52-7de27e8f82f1>\",\"WARC-IP-Address\":\"65.202.222.160\",\"WARC-Target-URI\":\"http://proxy.osapublishing.org/oe/fulltext.cfm?uri=oe-11-21-2783&id=77672\",\"WARC-Payload-Digest\":\"sha1:ETVUATDPCK4WOF2PG6UVNUFAAVEPOK7D\",\"WARC-Block-Digest\":\"sha1:FNQTMHGBUKBKNMW5VMON4WL6EP5W3J24\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141202590.44_warc_CC-MAIN-20201129184455-20201129214455-00550.warc.gz\"}"}
https://dvcs.w3.org/hg/prov/rev/5279a2172dc6
[ "author James Cheney Wed, 29 Aug 2012 18:30:30 +0100 changeset 4373 5279a2172dc6 parent 4372 fbe20385e1f3 child 4374 9d2967fd1345\n* work on overview section\n model/prov-constraints.html\n```--- a/model/prov-constraints.html\tWed Aug 29 08:19:06 2012 -0400\n+++ b/model/prov-constraints.html\tWed Aug 29 18:30:30 2012 +0100\n@@ -791,7 +791,7 @@\nnamed event such as usage, generation, or a relationship such as\nThis specification includes <a href=\"#type-constraints\">disjointness and typing constraints</a> that\n-check these requirements. Here, we merely\n+check these requirements. Here, we\nsummarize the type constraints in <a href=\"#typing-table\">Table 1</a>.\n</p>\n\n@@ -874,8 +874,8 @@\n</tr>\n<tr align=\"center\">\n<td rowspan=\"2\" class=\"name\">wasInvalidatedBy(id; e,a,t,attrs)</td>\n-\t<td class=\"name\">a</td>\n-\t<td class=\"name\">'activity'</td>\n+\t<td class=\"name\">e</td>\n+\t<td class=\"name\">'entity'</td>\n</tr>\n<tr align=\"center\">\n<td class=\"name\">a</td>\n@@ -981,14 +981,6 @@\n<section>\n<h4>Validation Process Overview</h4>\n\n-<div style=\"text-align: center;\">\n-<figure>\n-<img src=\"images/constraints/prov-c.graffle.svg/overview.svg\" alt=\"validation process overview\" />\n-<br>\n-<figcaption id=\"validation-process-overview\">Overview of the Validation Process</figcaption>\n-</figure> <!-- <b>new Figure 1:</b> -->\n-</div>\n-\n<div class=\"note\">\nIn progress, outline to be filled in\n</div>\n@@ -1020,11 +1012,11 @@\n\n<p>Several definitions and inferences conclude by saying that some\nobjects exist such that some other formulas hold. Such an inference\n- introduces fresh <em>existential variables</em> into the instance.\n- Such a variable denotes a fixed object that exists, but its exact\n- identity is unknown, so we often refer to such variables as\n- <em>existential variables</em>. We do not allow existential\n- variables that stand for unknown attribute lists. </p>\n+ introduces fresh <a>existential variable</a>s into the instance. An\n+ existential variable denotes a fixed object that exists, but its\n+ exact identity is unknown. Existential variables can stand for\n+ unknown identifiers or literal values only; we do not allow\n+ existential variables that stand for unknown attribute lists. </p>\n\n<p>In particular, many\noccurrences of the placeholder symbol <span class=\"name\">-</span> stand for unknown\n@@ -1035,24 +1027,33 @@\nplaces.\n</p>\n<p>An expression is called a <em>term</em> if it is either a\n- constant identifier, literal, placeholder, or variable, and write\n+ constant identifier, literal, placeholder, or variable. We write\n<span class=\"math\">t</span> to denote an arbitrary term.\n</p>\n\n<h4>Substitution</h4>\n-<p>A substitution is a function that maps variables to terms. Concretely, since we only\n+<p>A <em>substitution</em> is a function that maps variables to terms. Concretely, since we only\nneed to consider substitutions of finite sets of variables, we can\n- write substitutions as <span class=\"math\">[x_1 = t_1,...,x_n=t_n]</span>. A substitution\n- <span class=\"math\">S = [x_1 = t_1,...,x_n=t_n]</span>\n+ write substitutions as <span class=\"math\">[x<sub>1</sub> = t<sub>1</sub>,...,x<sub>n</sub>=t<sub>n</sub>]</span>. 
A substitution\n+ <span class=\"math\">S = [x<sub>1</sub> = t<sub>1</sub>,...,x<sub>n</sub>=t<sub>n</sub>]</span>\ncan be <em>applied</em> to a term as follows.\n+</p>\n<ol><li>\n- If the term is a variable <span class=\"math\">x_i</span>, one of the variables in the\n- domain of <span class=\"math\">S</span>, then <span class=\"math\">S(x) = t_i</span>.\n+ If the term is a variable <span class=\"math\">x<sub>i</sub></span>, one of the variables in the\n+ domain of <span class=\"math\">S</span>, then <span class=\"math\">S(x<sub>i</sub>) = t<sub>i</sub></span>.\n</li>\n- <li>\n+ <li>If the term is a constant identifier or literal <span\n+ class=\"math\">c</span>, then <span class=\"math\">S(c) = c</span>.\n</li>\n</ol>\n+<p>\n+ In addition, a substitution can be applied to an atomic formula\n+ (PROV statement) <span class=\"math\">p(t<sub>1</sub>,...,t<sub>n</sub>)</span> by applying it to each term,\n+ that is, <span class=\"math\">S(p(t<sub>1</sub>,...,t<sub>n</sub>)) = p(S(t<sub>1</sub>),...,t<sub>n</sub>)</span>. Likewise, a\n+ substitution <span class=\"math\">S</span> can be applied to an instance <span class=\"math\">I</span> by applying\n+ it to each atomic formula (PROV statement) in <span class=\"math\">I</span>, that is, <span class=\"math\">S(I)\n+ = {S(A) | A ∈ I}</span>.\n</p>\n\n@@ -1071,30 +1072,36 @@\n<p>The atomic constraints considered in this specification can be\nviewed as atomic formulas:</p>\n<ul>\n- <li>Uniqueness constraints...</li>\n- <li>Ordering constraints...\n+ <li>Uniqueness constraints employ atomic equational formulas <span class=\"math\">t =\n+ t'</span>.</li>\n+ <li>Ordering constraints employ atomic precedence relations that can\n+ be thought of as binary formulas <span class=\"math\">precedes(t,t')</span> or <span class=\"math\">strictly_precedes(t,t')</span>\n</li>\n- <li>Typing constraints...\n+ <li>Typing constraints employ the set-valued <span class=\"name\">typeOf</span> function, but\n+ the property <span class=\"name\">type ∈ typeOf(id)</span> can be represented as a binary\n+ relation <span class=\"name\">typeOf(id,type)</span>.\n</li>\n- <li>Impossibility constraints</li>\n+ <li>Impossibility constraints employ the conclusion <span class=\"name\">INVALID</span>,\n+ which is equivalent to the logical constant <span class=\"math\">False</span>. </li>\n</ul>\n<p> Similarly, the definitions, inferences, and constraint rules in this\nspecification can also be viewed as logical formulas.</p>\n<ul>\n<li>\n- A definition of the form \"A IF AND ONLY IF there\n- exists y1...ym such that B1 and ... and Bk\"\n- can be thought of as a formula \"\\forall x1,....,xn. A <==> \\exists y1...ym such that B1 and ... and Bn\", where x1...xn are the\n+ A definition of the form \"<span class=\"name\">A</span> <span class=\"conditional\">IF AND ONLY IF</span> there\n+ exists <span class=\"name\">y<sub>1</sub></span>...<span class=\"name\">y<sub>m</sub></span> such that <span class=\"name\">B<sub>1</sub></span> and ... and <span class=\"name\">B<sub>k</sub></span>\"\n+ can be thought of as a formula \"<span class=\"math\">∀ x<sub>1</sub>,....,x<sub>n</sub>. A ⇔ ∃ y<sub>1</sub>...y<sub>m</sub> . B<sub>1</sub> ∧ ... ∧ B<sub>k</sub></span>\", where <span class=\"math\">x<sub>1</sub></span>...<span class=\"math\">x<sub>n</sub></span> are the\nfree variables of the formula.\n</li>\n-<li>An inference of the form \"IF A THEN there\n- exists y1...ym such that B1 and ... and Bk\" can be thought of as a formula \"\\forall x1,....,xn. A ==> \\exists y1...ym such that B1 and ... 
and Bn\", where x1...xn are the\n+<li>An inference of the form \"<span class=\"conditional\">IF</span> <span class=\"name\">A<sub>1</sub></span> and ... and <span class=\"name\">A<sub>l</sub></span> <span class=\"conditional\">THEN</span> there\n+ exists <span class=\"name\">y<sub>1</sub></span>...<span class=\"name\">y<sub>m</sub></span> such that <span class=\"name\">B<sub>1</sub></span> and ... and <span class=\"name\">B<sub>k</sub></span>\" can\n+ be thought of as a formula \"<span class=\"math\">∀ x<sub>1</sub>,....,x<sub>n</sub>. A<sub>1</sub> ∧ ... ∧ A<sub>l</sub> ⇒ ∃ y<sub>1</sub>...y<sub>m</sub> . B<sub>1</sub> ∧ ... ∧ B<sub>k</sub></span>\", where <span class=\"math\">x<sub>1</sub></span>...<span class=\"math\">x<sub>n</sub></span> are the\nfree variables of the formula.\n</li>\n-<li>A uniqueness, ordering, or typing constraint of the form \"IF A THEN C\" can be viewed as a formula\n- \"forall x1...xn. A ==> C\". </li>\n-<li>A constraint of the form \"IF A THEN INVALID\" can be viewed as a formula\n- \"forall x1...xn. A ==> false\". </li>\n+<li>A uniqueness, ordering, or typing constraint of the form \"<span class=\"conditional\">IF</span> <span class=\"name\">A</span> <span class=\"conditional\">THEN</span> <span class=\"name\">C</span>\" can be viewed as a formula\n+ \"<span class=\"math\">∀ x<sub>1</sub>...x<sub>n</sub>. A ⇒ C</span>\". </li>\n+<li>A constraint of the form \"<span class=\"conditional\">IF</span> <span class=\"name\">A</span> <span class=\"conditional\">THEN INVALID</span>\" can be viewed as a formula\n+ \"<span class=\"math\">∀ x<sub>1</sub>...x<sub>n</sub>. A ⇒ False</span>\". </li>\n</ul>\n\n@@ -1188,6 +1195,16 @@\nindirectly referenced in other relations.\n</p>\n\n+\n+<div style=\"text-align: center;\">\n+<figure>\n+<img src=\"images/constraints/prov-c.graffle.svg/overview.svg\" alt=\"validation process overview\" />\n+<br>\n+<figcaption id=\"validation-process-overview\">Overview of the Validation Process</figcaption>\n+</figure> <!-- <b>new Figure 1:</b> -->\n+</div>\n+\n+\n<h4>Checking ordering, typing, and impossibility constraints</h4>\n<p>\nThe ordering, typing, and impossibility constraints are checked\n@@ -1231,7 +1248,7 @@\nlogic, if we consider normalized PROV instances with existential\nvariables to represent sets of possible situations, then two normal\nforms may describe the same situation but differ in inessential\n- details such as the order of statements or of elemennts of\n+ details such as the order of statements or of elements of\nattribute-value lists. To remedy this, we can easily consider\ninstances to be equvalent up to reordering of attributes. However,\ninstances can also be equvalent if they differ only in choice of```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6723428,"math_prob":0.8597506,"size":8973,"snap":"2021-43-2021-49","text_gpt3_token_len":2710,"char_repetition_ratio":0.21228677,"word_repetition_ratio":0.21270962,"special_character_ratio":0.32998997,"punctuation_ratio":0.1604664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99359953,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T05:40:49Z\",\"WARC-Record-ID\":\"<urn:uuid:1eb1631e-f749-4e03-aeb2-e8e7edfc285b>\",\"Content-Length\":\"25834\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1acadd0b-6bae-43e6-af18-9360eafc5a19>\",\"WARC-Concurrent-To\":\"<urn:uuid:09d0f06f-31c4-4a4e-b144-43d60a4b6935>\",\"WARC-IP-Address\":\"104.18.23.19\",\"WARC-Target-URI\":\"https://dvcs.w3.org/hg/prov/rev/5279a2172dc6\",\"WARC-Payload-Digest\":\"sha1:QXLPPT6ILO4LN3KN3U5BCADQKGAMLW4Z\",\"WARC-Block-Digest\":\"sha1:5GNTLEMQYCXMBQ66HHIPA7QBV2ALE373\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585242.44_warc_CC-MAIN-20211019043325-20211019073325-00334.warc.gz\"}"}
https://lbs-to-kg.appspot.com/1670-lbs-to-kg.html
[ "Pounds To Kg\n\n# 1670 lbs to kg1670 Pounds to Kilograms\n\nlbs\n=\nkg\n\n## How to convert 1670 pounds to kilograms?\n\n 1670 lbs * 0.45359237 kg = 757.4992579 kg 1 lbs\nA common question is How many pound in 1670 kilogram? And the answer is 3681.71977849 lbs in 1670 kg. Likewise the question how many kilogram in 1670 pound has the answer of 757.4992579 kg in 1670 lbs.\n\n## How much are 1670 pounds in kilograms?\n\n1670 pounds equal 757.4992579 kilograms (1670lbs = 757.4992579kg). Converting 1670 lb to kg is easy. Simply use our calculator above, or apply the formula to change the length 1670 lbs to kg.\n\n## Convert 1670 lbs to common mass\n\nUnitMass\nMicrogram7.574992579e+11 µg\nMilligram757499257.9 mg\nGram757499.2579 g\nOunce26720.0 oz\nPound1670.0 lbs\nKilogram757.4992579 kg\nStone119.285714286 st\nUS ton0.835 ton\nTonne0.7574992579 t\nImperial ton0.7455357143 Long tons\n\n## What is 1670 pounds in kg?\n\nTo convert 1670 lbs to kg multiply the mass in pounds by 0.45359237. The 1670 lbs in kg formula is [kg] = 1670 * 0.45359237. Thus, for 1670 pounds in kilogram we get 757.4992579 kg.\n\n## 1670 Pound Conversion Table", null, "## Alternative spelling\n\n1670 lbs to kg, 1670 lbs in kg, 1670 lbs to Kilogram, 1670 lbs in Kilogram, 1670 lb to Kilograms, 1670 lb in Kilograms, 1670 Pound to kg, 1670 Pound in kg, 1670 Pounds to Kilograms, 1670 Pounds in Kilograms, 1670 Pounds to kg, 1670 Pounds in kg, 1670 lb to Kilogram, 1670 lb in Kilogram, 1670 lbs to Kilograms, 1670 lbs in Kilograms, 1670 Pound to Kilogram, 1670 Pound in Kilogram" ]
[ null, "https://lbs-to-kg.appspot.com/image/1670.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79549867,"math_prob":0.71214485,"size":1136,"snap":"2019-51-2020-05","text_gpt3_token_len":378,"char_repetition_ratio":0.26501766,"word_repetition_ratio":0.009803922,"special_character_ratio":0.4278169,"punctuation_ratio":0.15873016,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.959883,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T16:58:45Z\",\"WARC-Record-ID\":\"<urn:uuid:a3589b9a-1e8f-4381-a552-b2ae4c20581a>\",\"Content-Length\":\"28453\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1e6abd3-a9c1-4eb0-87c7-13889af8401a>\",\"WARC-Concurrent-To\":\"<urn:uuid:1083c189-4341-4044-b92b-b331abec7a3e>\",\"WARC-IP-Address\":\"172.217.7.180\",\"WARC-Target-URI\":\"https://lbs-to-kg.appspot.com/1670-lbs-to-kg.html\",\"WARC-Payload-Digest\":\"sha1:S2U5TDIHLB47P5LEAYO7B6P4WG6U2V7F\",\"WARC-Block-Digest\":\"sha1:AOUASZIHNVETQD5AD5ZQ5XDAWIBYVYXY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541281438.51_warc_CC-MAIN-20191214150439-20191214174439-00417.warc.gz\"}"}
https://www.physicstutorials.org/pt/index.php?m=51
[ "## Centripetal Force\n\nCentripetal Force:\n\nSo far we have talked about angular speed, tangential speed and centripetal acceleration. As I mentioned in Newton’s Second Law of motion, if there is a net force than our mass has acceleration. In this case we find the acceleration first, so if there is acceleration then we can say there must be also a net force causing that acceleration. The direction of this net force is same as the direction of acceleration which is towards to the center. Look at the given picture that shows the directions of force and acceleration of an object doing circular motion in vertical. Don’t forget! Direction of acceleration and force is always same.", null, "From the Newton’s Second Law of Motion;\n\nF=m.a where; m is mass of the object, r is the radius of the circle, T is\n\nFc=-m4π²r/T² or the period, V is the tangential speed\n\nFc=mv²/r\n\nLook at the given examples of centripetal force.", null, "First picture shows the motion of a stone tied up with a string doing circular motion. T represents the tension of the string towards to the center. In this case centripetal force is equal to the tension in the rope. In second picture, a car has circular motion. Force exerted by the friction to the tiers of the car makes it do circular motion. Only force towards to the center is friction force. Thus, in this case our centripetal force becomes the friction force. We can increase the number of examples. For example, electrical forces or gravitational force towards to the center can be centripetal force of that system.\n\nExample: Two objects A and B do circular motion with constant tangential speeds. Object A has mass 2m and radius R and object B has mass 3m and radius 2R. If the centripetal forces of these objects are the same find the ratio of the tangential speed of these objects.", null, "Example: A car makes a turn on a curve of having radius 8m. If the car does not slide find the tangential velocity of it. (Coefficient of friction between the road and the tiers of the car =0, 2 and g=10m/s²)", null, "Circular Motion on Inclined Planes\n\nWe examine this subject with an example. Look at the given picture and analyze the forces shown on the picture.", null, "As you can see from the picture given above, we showed the forces acting on the car. For having safe turn on the curve car must have the value given above which is the top limit. It can also have less speed than given above. If we want to increase the speed of the turn we should increase the slope of the road.\n\nExample: Car having mass 1500kg makes a turn on the road having radius 150m and slope 20º. What is the maximum speed that car can have while turning for safe trip?", null, "Rotational Motion Exams and Solutions\n\nAuthor:" ]
[ null, "https://www.physicstutorials.org/pt/images/Rotational_Motion/centripetalforce.png", null, "https://www.physicstutorials.org/pt/images/Rotational_Motion/fcentripetalimages.png", null, "https://www.physicstutorials.org/pt/images/Rotational_Motion/rotationexample4.png", null, "https://www.physicstutorials.org/pt/images/Rotational_Motion/rotationexample5son.png", null, "https://www.physicstutorials.org/pt/images/Rotational_Motion/fcentripetalinclinedimage.png", null, "https://www.physicstutorials.org/pt/images/Rotational_Motion/rotationexample6.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8831949,"math_prob":0.9810516,"size":3067,"snap":"2019-51-2020-05","text_gpt3_token_len":695,"char_repetition_ratio":0.1452824,"word_repetition_ratio":0.011560693,"special_character_ratio":0.21095534,"punctuation_ratio":0.082474224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980169,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,6,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T07:12:32Z\",\"WARC-Record-ID\":\"<urn:uuid:9260d65a-443d-48e4-801d-6ba1ac0ba8ac>\",\"Content-Length\":\"10922\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a169fbb1-c6e6-4d63-9343-0899c4c6167b>\",\"WARC-Concurrent-To\":\"<urn:uuid:a217907a-2d4d-460d-9c36-7d036e48cc2a>\",\"WARC-IP-Address\":\"104.24.115.25\",\"WARC-Target-URI\":\"https://www.physicstutorials.org/pt/index.php?m=51\",\"WARC-Payload-Digest\":\"sha1:AMCN4T6AWM4YTL4H4ZHQ7U4RLC767RFD\",\"WARC-Block-Digest\":\"sha1:2WMRXRFXTUYB5C3JNY36KIWZPAGK4MOW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540585566.60_warc_CC-MAIN-20191214070158-20191214094158-00230.warc.gz\"}"}
https://freeprojectscodes.com/one-dimensional-array-in-c-and-c/
[ "", null, "# One dimensional array in C and C++ to store name and marks of 3 subject.\n\n#### Assignment – 1 :\n\nOne dimensional array in C and C++ to store name of student and marks of Three subject. and give output in total, average marks of student and whole class.\n\n## A teacher needs a program to record marks for a class of 30 students who have sat threecomputerscience tests.Write and test a program for the teacher.\n\n• Your program must include appropriate prompts for the entry of data.\n• Error messages and other output need to be set out clearly and understandably.\n• All variables, constants and other identifiers must have meaningful names\nYou will need to complete these three tasks. Each task must be fully tested.\n\n### TASK 1 – Set up arrays:\n\nSet-up one dimensional arrays to store:\n• Student names\n• Student marks for Test 1, Test 2 and Test 3\no Test 1 is out of 20 marks\no Test 2 is out of 25 marks\no Test 3 is out of 35 marks\n• Total score for each student\nInput and store the names for 30 students. You may assume that the students’ names are\nunique.\nInput and store the students’ marks for Test 1, Test 2 and Test 3. All the marks must be\nvalidated on\nentry and any invalid marks rejected.\n\nCalculate the total score for each student and store in the array.\nCalculate the average total score for the whole class.\nOutput each student’s name followed by their total score.\nOutput the average total score for the class.\n\nSelect the student with the highest total score and output their name and total score.\n\n## Output of above Question:\n\n##### Screen shot of output:", null, "One dimensional array in C and C++ to store name of student and marks of 3 subject\n\n# Answer for One dimensional array in C and C++ to store name and marks of 3 subject.\n\n``````#include <iostream>\n#include <string>\nusing namespace std;\n\nint main()\n{\nint count = 5; // set number of student as you want\nstring students[count]; // holds the number of students in array eg: (Kishan, Nashib, Laxman)\n\ncout << \"------------------------------------------\" << endl;\ncout << \"-- Welcome to Student Management System --\" << endl;\ncout << \"------------------------------------------\" << endl;\n\ncout << \"\\nEnter the name of the students: \" << endl;\nfor (int i = 0; i < count; i++) //4 i++ (1 incr)\n{\ncout << \"Name Of Student \" << i + 1 << \": \";\ngetline(cin,students[i]);\n}\n\n//For test 1\ndouble test1Arr[count];\ncout << \"\\n*** Test 1 Marks (out of 20) ***\" << endl;\nfor (int i = 0; i < count; i++)\n{\ncout << \"Enter Marks obtained by \" << students[i] << \" : \";\ncin >> test1Arr[i];\n}\n\n//For test 2\ndouble test2Arr[count];\ncout << \"\\n*** Test 2 Marks (out of 25) ***\" << endl;\nfor (int i = 0; i < count; i++)\n{\ncout << \"Enter Marks obtained by \" << students[i] << \" : \";\ncin >> test2Arr[i];\n}\n\n//For test 3\ndouble test3Arr[count];\ncout << \"\\n\\n*** Test 3 Marks (out of 35) ***\" << endl;\nfor (int i = 0; i < count; i++)\n{\ncout << \"Enter Marks obtained by \" << students[i] << \" : \";\ncin >> test3Arr[i];\n}\n\n//Storing total of each student and adding in array\ndouble eachTotalMarksArray[count];\nfor (int i = 0; i < count; i++)\n{\neachTotalMarksArray[i] = test1Arr[i]+test2Arr[i]+test3Arr[i];\n}\n\n//{100,100,80}\n\n//Calculating average total score for whole class\ndouble TotalMarks = 0;\nfor (int i = 0; i < count; i++)\n{\nTotalMarks = TotalMarks + eachTotalMarksArray[i];\n}\ndouble averageTotalMarks = TotalMarks/count;\n\ncout << \"\\nTotal 
Marks Obtained By Each Students: \" << endl;\nfor(int i = 0; i < count; i++) {\ncout << students[i] << \" : \" << eachTotalMarksArray[i] << endl;\n}\n\ncout << \"\\nAverage Total Score For The Class: \" << averageTotalMarks << endl;\nreturn 0;\n\n}\n\n``````\n\n###### Download Answer and full code of One dimensional array in C and C++ to store name and marks of 3 subject.\n\nhow to download free Car Rental System in Python Django with free Source Code?\n\nGo to freeprojectscodes.com and search for project. You can get Different type of project. which is very useful for collage and school student.\n\nComplete and fully working Student Project With Download Free Source code in freeprojectscode.com\n\nfree Online Examination System In PHP.", null, "" ]
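One requirement of the original task that the posted answer does not implement is the validation of marks on entry. A possible helper function is sketched below (an editorial addition, not part of the downloadable answer; the name readMark and the messages are assumptions):

```cpp
#include <iostream>
#include <limits>
#include <string>
using namespace std;

// Keeps prompting until a numeric mark between 0 and maxMark is entered;
// any other input is rejected, as the task requires.
double readMark(const string& prompt, double maxMark)
{
    double mark;
    while (true)
    {
        cout << prompt;
        if (cin >> mark && mark >= 0 && mark <= maxMark)
            return mark;
        cout << "Invalid mark, please enter a number between 0 and " << maxMark << endl;
        cin.clear();                                          // reset the stream error flags
        cin.ignore(numeric_limits<streamsize>::max(), '\n');  // discard the bad input line
    }
}
```

In the main program, a call such as test1Arr[i] = readMark("Enter Marks obtained by " + students[i] + " : ", 20); would then replace the plain cin >> test1Arr[i];, and similarly with limits 25 and 35 for Test 2 and Test 3. The remaining task requirement, selecting and printing the student with the highest total score, can be added with one more loop over eachTotalMarksArray.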
[ null, "https://FREEPROJECTSCODES.COM/wp-content/uploads/2022/01/Screenshot-25.png", null, "https://FREEPROJECTSCODES.COM/wp-content/uploads/2022/01/Screenshot-26.png", null, "https://FREEPROJECTSCODES.COM/wp-content/plugins/chp-ads-block-detector/assets/img/icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7786888,"math_prob":0.79093295,"size":4211,"snap":"2023-40-2023-50","text_gpt3_token_len":1074,"char_repetition_ratio":0.15997148,"word_repetition_ratio":0.21895862,"special_character_ratio":0.31180242,"punctuation_ratio":0.13116883,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99398226,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T00:57:21Z\",\"WARC-Record-ID\":\"<urn:uuid:204056b4-49ae-45a6-be61-5aae7a869340>\",\"Content-Length\":\"221033\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:767dce7e-02e4-4c64-82fe-596315407dd9>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3d95e7a-9a50-4de5-a58d-0e4ca80f0a8f>\",\"WARC-IP-Address\":\"172.67.209.26\",\"WARC-Target-URI\":\"https://freeprojectscodes.com/one-dimensional-array-in-c-and-c/\",\"WARC-Payload-Digest\":\"sha1:MQBJCGH2GAUI5HRGWLZIFQXYEKW52DUA\",\"WARC-Block-Digest\":\"sha1:DYA4U3X74UTLU556MMQXBQOS4IC3TQUR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511284.37_warc_CC-MAIN-20231003224357-20231004014357-00205.warc.gz\"}"}
https://support.zemax.com/hc/en-us/articles/1500005578762-How-to-build-a-spectrometer-theory
[ "# How to build a spectrometer - theory\n\nSpectroscopy is a non-invasive technique and one of the most powerful tools available to study tissues, plasmas and materials. This article describes how to model a lens-grating-lens (LGL) spectrometer using paraxial elements, addressing the design process from the required parameters to the performance evaluation with Advanced OpticStudio features such as Multiple Configurations, Merit Functions and ZPL macros.\n\nAuthored By Lorenz Martin\n\nArticle Attachments\n\n## Video\n\nHow to build a spectrometer -theory - YouTube\n\n## Introduction\n\nOptical spectrometers are instruments to measure the intensity of light as a function of wavelength. There is a variety of generic setups for spectrometers. This article features the line-grating-line (LGL) spectrometer. After setting up the spectrometer in OpticStudio, its critical design parameters are identified and discussed.\n\n## Basic setup of an LGL spectrometer\n\nThe basic setup of an LGL spectrometer is as follows:", null, "The polychromatic light enters the spectrometer through the entrance pinhole resulting in a divergent beam. The collimator lens is then used to generate parallel rays. The following transmission diffraction grating is the core element of the spectrometer. It changes the direction of the light beam as a function of its wavelength (i.e. its colour). The focusing lens, finally, focuses the light beams on the detector. Every wavelength has a different position on the detector, and by measuring the intensity as a function of position on the detector the spectrum of the light is obtained.\n\nAs a first approach, this setup is modelled in OpticStudio using paraxial elements. Doing so allows to ignore aberration and optimization issues, which are discussed in the Knowledgebase article \"How to build a spectrometer – implementation\". On the other hand, our LGL spectrometer is suitable for understanding the basic physical concepts of a spectrometer and its resolution.\n\n## Modelling a paraxial LGL spectrometer in OpticStudio\n\n### System setup\n\nLet’s start with setting the basic parameters of our design in the System Explorer. Set the Entrance Pupil Diameter as follows (we will see later how the aperture affects the performance of the spectrometer):", null, "With our spectrometer, we want to analyse visible light in the range from λmin = 400 nm to λmax = 700 nm wavelength, resulting in a bandwidth of Δλ = 300 nm. Hence, we set three wavelengths, two at the edge of the spectrum and the central wavelength λ0 at 550 nm. The latter will also be the primary wavelength:", null, "### Collimator lens\n\nThis done, we can proceed with the first element in the spectrometer and add the first lines in the lens file. We are assuming that the light originates from a point source (corresponding to a pinhole). Using a paraxial lens with a focal length of 30 mm positioned 30 mm behind the pinhole will produce a collimated beam. A second surface of 30 mm thickness is inserted to account for the distance between collimating lens and diffraction grating:", null, "The 3D Layout of our design will look like this:", null, "### Diffraction grating\n\nThe next element in the spectrometer is the transmission diffraction grating. Let’s have a closer look at the grating before implementing it in OpticStudio, since this is the crucial element of the spectrometer.\n\nThe grating is essentially a stop with several slits arranged in parallel and with equal distances between them. 
For the sake of simplicity, we first have a look at a grating with only two slits (top view):", null, "The incident beam is collimated, so all the rays in the beam are parallel to each other. If we consider the two rays passing through the two slits (red arrows), we can calculate the path difference, Δs, between these two rays (blue section) as a function of the distance between the two slits, d, the angle of incidence, α, and the diffraction angle, β:", null, "We want this path difference to be one wavelength in order to have constructive interference between the two rays:", null, "The two previous equations enable us to calculate the diffraction angle:", null, "This formula describes how polychromatic light is split into its wavelengths in a spectrometer. As we can see, the diffraction angle only depends on the wavelength (for given α and d).\n\nThe concept of the double slit can then be extended to a grid with many slits, which will concentrate more rays of a specific wavelength in the direction of the diffraction angle and thus enhance the diffraction efficiency.\n\nThere is much more to say about diffraction gratings and their features such as efficiency, blaze angle, etc. This information can be found in the Knowledgebase article \"Simulating diffraction efficiency of surface-relief grating using the RCWA method\". We just keep in mind that a diffraction grating is characterized by its distance between two adjacent slits and that it diverts the collimated light beam as a function of its wavelength.\n\nWhen implementing the diffraction grating in the spectrometer, the angle of incidence is typically chosen so that it is equal to the diffraction angle for the central wavelength, i.e.", null, "and using equation 1", null, "In our example we assume d = 0.5 µm and get α = 33.367°. With that in mind, we set up the diffraction grating in OpticStudio. First, we introduce a coordinate break in our lens file and set the Tilt About X to 33.367° in order to tilt the rays by the angle of incidence. The next line to add is the Diffraction Grating. Set the Lines/µm (which is the inverse of d) to 2, and the diffraction order to –1. Another Coordinate Break is needed to account for the diffraction angle. Here, we set a Chief Ray solve for Tilt About X to have the coordinates automatically follow the primary wavelength:", null, "### Focusing lens and detector\n\nThe last group of elements in the spectrometer is the focusing lens and detector. We add four lines to our lens file: the space between grating and focusing lens (30 mm), the paraxial focusing lens (focal length ff = 30 mm), a space accounting for the focal length, and the detector plane, respectively:", null, "Our 3D Layout will now look like this, once you have adjusted the settings as shown here:", null, "One last setting concerns the rays in the 3D Layout marked with the red circle in the previous image where OpticStudio draws too many lines. They can be eliminated by setting the properties of surface 6 in the lens file:", null, "Now we are done with the design of our paraxial LGL spectrometer and we can open a Standard Spot Diagram to view the spot size in the image plane (i.e. on the detector) at the three wavelengths we chose initially:", null, "It is seen that the spot size is infinitesimally small, which is only possible because we chose paraxial lenses and used geometric ray tracing. In reality, the spots are larger due to diffractive effects. That’s what we will address in the last part of this article. 
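As a numerical cross-check on the grating discussion above (this sketch is not part of the original article): assuming equation 1 takes the first-order form sin β = λ/d − sin α, with the incidence angle chosen so that α = β at the central wavelength, the angles used in the example can be reproduced in a few lines. The same angles reappear below when the detector is dimensioned.

```python
# Minimal sketch: reproduce the grating angles of the example (d = 0.5 um, 400-700 nm).
from math import asin, sin, degrees

d = 0.5e-6       # slit spacing in metres (2 lines per micrometre)
lam0 = 550e-9    # central wavelength

# Incidence angle chosen so that alpha equals the diffraction angle at lam0:
alpha = asin(lam0 / (2 * d))
print(f"alpha = {degrees(alpha):.3f} deg")            # ~33.367 deg

def beta(lam):
    """Diffraction angle from equation 1: sin(beta) = lam/d - sin(alpha)."""
    return asin(lam / d - sin(alpha))

for lam in (400e-9, 550e-9, 700e-9):
    print(f"beta({lam*1e9:.0f} nm) = {degrees(beta(lam)):.2f} deg")
# ~14.48 deg, ~33.37 deg and ~58.21 deg
```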
But first we have a closer look at the focusing lens and at the detector to understand how they must be dimensioned.\n\n## Spectrometer resolution\n\n### Detector width\n\nThe width of the detector is defined through three parameters: the bandwidth of the spectrometer, Δλ = λmax – λmin, the slit distance of the grating, d, and the focal length ff of the focusing lens. Whereas Δλ and d are typically prerequisites, the focusing lens can be chosen to match the geometry of the detector.\n\nTaking the minimum and maximum wavelength of the spectrometer (in our example 400 nm and 700 nm, respectively), we can calculate the minimum and maximum angle of diffraction using equation 1. The result is βmin = 14.48° and βmax = 58.21°, which can be verified in OpticStudio in the Single Ray Trace data, tracing the marginal ray at the minimum and maximum wavelength:", null, "When the rays pass through the focusing lens under minimum and maximum angle, we have the following situation:", null, "where ff is the focal length of the focusing lens and L the detector width. Consequently, we can calculate the detector width using", null, "In our example we get L = 24.16 mm. This result can again be verified in OpticStudio. A simple and approximate way is to measure it directly with the Measure tool in the 3D Layout:", null, "A more sophisticated and precise way is to use operands. For this purpose, we open the Merit Function Editor, key in the following lines and update the window (red arrow):", null, "With the REAY operand we get the real ray’s y-coordinate, in our example on surface 9 (the detector). We select the values for wave 1 and 3, which correspond to 400 nm and 700 nm wavelength, respectively. The DIFF operand is used to calculate the difference between the two y-coordinates. The resulting value is now exactly what we calculated analytically before.\n\nLet’s recall the essential outcome of the previous considerations: Once the bandwidth of a spectrometer is defined, the diffraction grating yields the minimum and maximum diffraction angle (equation 1). The minimum and maximum diffraction angles, in turn, together with the focal length of the focusing lens ff, define the detector width (equation 2). Large detectors call for a large ff and vice versa.\n\n### Remapping of the wavelengths on the detector\n\nWhen we have a look at the Spot Diagram, we notice that the spots of the three wavelengths are not uniformly distributed on the detector surface, even though they are uniformly distributed in the wavelength range. This effect comes from the sine in equation 1 and must be accounted for in spectrometers by remapping the position on the detector to the corresponding wavelength.\n\nWe can calculate the mapping function (being the inverse of the remapping function) in OpticStudio by sweeping through the wavelengths of the spectrometer bandwidth and recording the position of the ray on the detector. An efficient way to do so is to use a Zemax Programming Language (ZPL) Macro. Download the attached macro Mapping_Function_Resolution.ZPL and save it in the folder Zemax\\Macros. Open it and have a look at the structure. The macro first gets the system wavelengths (operand WAVL) and then computes the y-coordinate of the ray on the detector (operand RAYY) while looping through the wavelengths using multiple configurations. 
The resulting plot after execution shows the mapping function:", null, "### Spectral resolution\n\nThe macro Mapping_Function_Resolution.ZPL produces a second plot showing the spectral resolution of the spectrometer R, i.e. the fraction of bandwidth, δλ, per unit width of the detector, ΔL:", null, "The spectral resolution as it is defined here is the inverse of the derivative of the mapping function. For this reason, it is computed in the same macro:", null, "The lower the spectral resolution, the less bandwidth we have per unit width of the detector. Multiplying the spectral resolution by the pixel width of the detector finally yields a measure of the spectrometer resolution, an important characteristic value of every spectrometer.\n\nWe could enhance the spectral resolution of the spectrometer by selecting a larger focal length for the focusing lens and thus spreading the spectrum over a larger detector width, according to equation 2. However, this strategy wouldn’t work out. We must also consider that the spot size on the detector is limited by diffraction, introducing new constraints for spectrometer design.\n\n### Diffraction limit\n\nA spectrometer can be considered as an optical system mapping an object (the entrance pinhole, i.e. a point source) to the image plane (the detector). Using rays to calculate the propagation of light through the optical system as OpticStudio does is very efficient. But the result we get with ray tracing does not fully correspond to reality. Instead of an infinitesimally small point (corresponding to a sharp image), the image of the point source will be blurred. This effect is due to diffraction and limits the resolution of all optical systems. The way an optical system like the spectrometer maps a point source into the blurred image is referred to as the point spread function.\n\nOpticStudio has a variety of tools to take diffraction into account. Here we consider the Airy disk (the diffraction-limited spot size) in the Spot Diagram where it is plotted along with its numerical value in the plot comments:", null, "The Airy disk is also used for the Rayleigh criterion. The Rayleigh criterion states that the images of two point sources can be discriminated as soon as the distance between them is larger than the radius of their Airy disk. In a spectrometer, the distance between two point sources corresponds to the fraction of bandwidth, δλ, as introduced in the previous section.\n\nThe Rayleigh criterion has a direct impact on the choice of the pixel size of the detector. It is useless to have pixels smaller than half the Airy disk radius since they would oversample the diffraction-limited resolution of the spectrometer.\n\nThe formula to calculate the radius of the Airy disk is", null, "where F# is the working f-number, which corresponds to the focal length of the focusing lens, ff, divided by the system aperture. The consequences of this relation are the following:\n\n1. The diffraction-limited resolution of the spectrometer varies with wavelength. This effect cannot be eliminated with the optical design.\n2. Choosing a large focal length for the focusing lens, ff, will increase the f-number which, in turn, increases the size of the Airy disk. This effect goes hand in hand with the detector width L as discussed in the previous section (equation 2): the detector width will increase as well. In the end, we only get larger Airy disks on a larger detector and do not enhance the spectrometer resolution.\n3. 
Choosing a large system aperture will decrease the f-number, which reduces the size of the Airy disk.\n\n## Choice of system parameters\n\nAssuming that the bandwidth and the grating of our spectrometer are pre-set, we have two parameters we can tune to get the most out of our spectrometer:\n\n### System aperture\n\nThe system aperture has a direct impact on the size of the Airy disk, i.e. the diffraction-limited resolution of our spectrometer (equation 3). It is a good strategy to choose the aperture as large as possible since this yields small Airy disks.\n\n### Focusing lens\n\nThe choice of the focal length of the focusing lens, ff, is more delicate. The most important point is to illuminate the detector entirely (equation 2). If the detector is small, ff is also small and we get a more compact spectrometer. On the other hand, small focal lengths entail more aberrations. Consequently, the detector should be chosen as large as possible. The diffraction-limited resolution of the spectrometer is not affected by the focusing lens, since the size of the Airy disk scales with the detector width." ]
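To tie the design rules of the article together numerically, the sketch below (not part of the original article) evaluates the detector width of equation 2 and the Airy-disk radius of equation 3. It assumes the detector is centred on the chief ray of the central wavelength, which reproduces the detector width quoted above, and it uses the common form r ≈ 1.22·λ·F# for the Airy radius; the entrance pupil diameter is a placeholder value, not taken from the article.

```python
# Sketch: detector width (equation 2) and diffraction-limited spot size (equation 3).
from math import asin, sin, tan

d, lam_min, lam0, lam_max = 0.5e-6, 400e-9, 550e-9, 700e-9
ff = 30e-3           # focal length of the focusing lens
epd = 5e-3           # entrance pupil diameter -- placeholder, see the System Explorer

alpha = asin(lam0 / (2 * d))                 # incidence angle, ~33.367 deg

def beta(lam):
    """Diffraction angle from equation 1."""
    return asin(lam / d - sin(alpha))

# Equation 2: width spanned on a detector centred on the central-wavelength ray
L = ff * (tan(beta(lam_max) - alpha) + tan(alpha - beta(lam_min)))
print(f"detector width L = {L*1e3:.2f} mm")  # ~24.15 mm, matching the quoted 24.16 mm to within rounding

# Equation 3: the Airy-disk radius grows with wavelength and with the working f-number
f_number = ff / epd
for lam in (lam_min, lam0, lam_max):
    print(f"Airy radius at {lam*1e9:.0f} nm: {1.22 * lam * f_number * 1e6:.1f} um")
```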
[ null, "https://support.zemax.com/hc/article_attachments/1500007670061/KA-01950_1_Layout1.png", null, "https://support.zemax.com/hc/article_attachments/1500007670081/KA-01950_2_SysExp1.png", null, "https://support.zemax.com/hc/article_attachments/1500007670101/KA-01950_3_Wave1.png", null, "https://support.zemax.com/hc/article_attachments/1500007670121/KA-01950_4_LDE1.png", null, "https://support.zemax.com/hc/article_attachments/1500007670141/KA-01950_5_Layout2.png", null, "https://support.zemax.com/hc/article_attachments/1500007505202/KA-01950_6_Layout3.png", null, "https://support.zemax.com/hc/article_attachments/1500007505302/KA-01950_7_Eqn1.png", null, "https://support.zemax.com/hc/article_attachments/1500007670161/KA-01950_8_Eqn2.png", null, "https://support.zemax.com/hc/article_attachments/1500007670321/KA-01950_9_Eqn3_1_.png", null, "https://support.zemax.com/hc/article_attachments/1500007670341/KA-01950_10_Eqn4.png", null, "https://support.zemax.com/hc/article_attachments/1500007670361/KA-01950_11_Eqn5.png", null, "https://support.zemax.com/hc/article_attachments/1500007670181/KA-01950_12_LDE2.png", null, "https://support.zemax.com/hc/article_attachments/1500007505222/KA-01950_13_LDE3.png", null, "https://support.zemax.com/hc/article_attachments/1500007505242/KA-01950_14_Layout4.png", null, "https://support.zemax.com/hc/article_attachments/1500007670201/KA-01950_15_SurfProp1.png", null, "https://support.zemax.com/hc/article_attachments/1500007505262/KA-01950_16_Spot1.png", null, "https://support.zemax.com/hc/article_attachments/1500007670221/KA-01950_17_SRT1.png", null, "https://support.zemax.com/hc/article_attachments/1500007670241/KA-01950_18_Layout5.png", null, "https://support.zemax.com/hc/article_attachments/1500007505282/KA-01950_19_Eqn6_2_.png", null, "https://support.zemax.com/hc/article_attachments/1500007670261/KA-01950_20_Layout6.png", null, "https://support.zemax.com/hc/article_attachments/1500007670281/KA-01950_21_MFE1.png", null, "https://support.zemax.com/hc/article_attachments/1500007670301/KA-01950_22_Plot1.png", null, "https://support.zemax.com/hc/article_attachments/1500007667641/KA-01950_23_Eqn7.png", null, "https://support.zemax.com/hc/article_attachments/1500007667661/KA-01950_24_Plot2.png", null, "https://support.zemax.com/hc/article_attachments/1500007502622/KA-01950_25_Spot2.png", null, "https://support.zemax.com/hc/article_attachments/1500007502642/KA-01950_26_Eqn8_3_.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8985877,"math_prob":0.9781767,"size":14845,"snap":"2022-27-2022-33","text_gpt3_token_len":3142,"char_repetition_ratio":0.18583654,"word_repetition_ratio":0.024259869,"special_character_ratio":0.1954867,"punctuation_ratio":0.09184423,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98350245,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T06:04:58Z\",\"WARC-Record-ID\":\"<urn:uuid:11257954-5e01-4c92-89ea-fb9e3dae18d2>\",\"Content-Length\":\"62833\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b79d6a79-1c33-4b42-9b42-ebb10a619897>\",\"WARC-Concurrent-To\":\"<urn:uuid:67a75553-4a2b-4d8e-ac8a-5685096ec075>\",\"WARC-IP-Address\":\"104.18.249.37\",\"WARC-Target-URI\":\"https://support.zemax.com/hc/en-us/articles/1500005578762-How-to-build-a-spectrometer-theory\",\"WARC-Payload-Digest\":\"sha1:OZZ6WQ2Q64BT6PMWGYEFS67I34L2NAHF\",\"WARC-Block-Digest\":\"sha1:W3AQYIAA5E7ZNXLTVYUHRVZR4IW3FXCJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103624904.34_warc_CC-MAIN-20220629054527-20220629084527-00769.warc.gz\"}"}
https://www.ms.u-tokyo.ac.jp/journal/number_e/jms1504_e.html
[ "## Vol. 15 (2008) No. 04", null, "Journal of Mathematical Sciences The University of Tokyo\n\n1. BENMERIEM, Khaled; BOUZAR, Chikh\nUltraregular generalized functions of Colombeau type\nVol. 15 (2008), No. 4, Page 427--447.\n\n2. Prokhorov, Yuri\nGap conjecture for $3$-dimensional canonical thresholds\nVol. 15 (2008), No. 4, Page 449--459.\n\n3. Sato, Yoshihisa\n2-spheres of square $-1$ and the geography of genus-2 Lefschetz fibrations\nVol. 15 (2008), No. 4, Page 461--491.\nQuantization of differential systems with the affine Weyl group symmetries of type $C_N^{(1)}$\nExact power series in the asymptotic expansion of the matrix coefficients with the corner $K$-type of $P_J$-principal series representations of $Sp(2,\\R)$" ]
[ null, "https://www.ms.u-tokyo.ac.jp/journal//jms_cover-s.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6636046,"math_prob":0.9258843,"size":1326,"snap":"2021-43-2021-49","text_gpt3_token_len":439,"char_repetition_ratio":0.204236,"word_repetition_ratio":0.27173913,"special_character_ratio":0.34087482,"punctuation_ratio":0.17372881,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9779466,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T04:08:59Z\",\"WARC-Record-ID\":\"<urn:uuid:0edc544e-89f5-4eed-8adb-0bbf25bd5a50>\",\"Content-Length\":\"27684\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37bf94cf-0588-432d-9fcb-31f6a4dd1c08>\",\"WARC-Concurrent-To\":\"<urn:uuid:af7bf3d9-9eab-4914-868c-520d0efbe15d>\",\"WARC-IP-Address\":\"157.82.16.27\",\"WARC-Target-URI\":\"https://www.ms.u-tokyo.ac.jp/journal/number_e/jms1504_e.html\",\"WARC-Payload-Digest\":\"sha1:5NB5VFIOWTT5BHXPEQUACJF36O2CWJUF\",\"WARC-Block-Digest\":\"sha1:CDVPGGONZDWLXBIODDOYUNVTK3R625GB\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362930.53_warc_CC-MAIN-20211204033320-20211204063320-00613.warc.gz\"}"}
http://jasabroadcast.xyz/kindergarten-math-counting-objects-worksheet-free
[ "# Kindergarten Math Counting Objects Worksheet Free\n\nPosted on February 27, 2017 by DonyaLicata\n\nKindergarten Numbers & Counting Worksheets - k5learning. Kindergarten Math Counting Objects Worksheet Free com Practice using 1st, 2nd, third, fourth and so on with our ordinal numbers worksheets. More than - less than. Compare groups of objects and numbers with our more than-less than worksheets. Simple Math. Add, subtract, recognize patters, measure things and count coins with our kindergarten math worksheets. Kindergarten Math Counting Objects Worksheet - Image Results More Kindergarten Math Counting Objects Worksheet images.", null, "Source: www.math-salamanders.com\n\nKindergarten Numbers & Counting Worksheets - k5learning.com Practice using 1st, 2nd, third, fourth and so on with our ordinal numbers worksheets. More than - less than. Compare groups of objects and numbers with our more than-less than worksheets. Simple Math. Add, subtract, recognize patters, measure things and count coins with our kindergarten math worksheets. Kindergarten Math Counting Objects Worksheet - Image Results More Kindergarten Math Counting Objects Worksheet images.\n\nCounting Objects | Preschool and Kindergarten Math Below, you will find a wide range of our printable worksheets in chapter Counting Objects of section Numbers 0 to 30.These worksheets are appropriate for Preschool and Kindergarten Math.We have crafted many worksheets covering various aspects of this topic, and many more. Math Counting Objects worksheets Kindergarten - edubuzzkids Fun learning online worksheets for Kindergarten, online math printable worksheets. HOME > KINDERGARTEN WORKSHEETS > KINDERGARTEN MATH WORKSHEETS > COUNTING > COUNTING OBJECTS. english ... Counting Objects Kindergarten. Count how many pre-k. Count and write Kindergarten. Count how many Kindergarten. Count tens & ones grade-1.\n\n13 Images of Counting Objects Kindergarten Math Worksheets See 13 Best Images of Counting Objects Kindergarten Math Worksheets. Inspiring Counting Objects Kindergarten Math Worksheets worksheet images. Count Objects and Write Number Worksheet Counting and Color Worksheet 1 10 Valentine's Day Math Worksheets Kindergarten Matching Number 1-20 Worksheets Writing Numbers as Words Worksheets. Count Objects Worksheets Count Objects . Pre school and kindergarten math worksheets for counting. Counting objects. Learning to count worksheets. Kindergarten math.\n\nCounting to 20 worksheets | K5 Learning Count to 20 worksheets. Students count the number of pictured objects (0-20) and write the number down. Straight forward practice in counting and writing numbers up to 20. Free preschool and kindergarten worksheets from K5 Learning.\n\nGallery of Kindergarten Math Counting Objects Worksheet Free" ]
[ null, "https://i0.wp.com/math-salamanders.com/image-files/printable-kindergarten-math-worksheets-counting-to-25-2.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80724466,"math_prob":0.57064736,"size":2642,"snap":"2019-26-2019-30","text_gpt3_token_len":529,"char_repetition_ratio":0.2767248,"word_repetition_ratio":0.35422343,"special_character_ratio":0.18319455,"punctuation_ratio":0.13507108,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9754138,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-15T23:02:49Z\",\"WARC-Record-ID\":\"<urn:uuid:6f99d071-f82c-4394-8570-b013461bfdde>\",\"Content-Length\":\"29712\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:128a1963-f51c-4d61-b60a-5e7118e9786a>\",\"WARC-Concurrent-To\":\"<urn:uuid:0554344c-8941-4334-a3ae-2e4f7711a983>\",\"WARC-IP-Address\":\"104.27.181.155\",\"WARC-Target-URI\":\"http://jasabroadcast.xyz/kindergarten-math-counting-objects-worksheet-free\",\"WARC-Payload-Digest\":\"sha1:73H2M3PAMQ5V6NXPF2Y74CPZYIXYH6ZM\",\"WARC-Block-Digest\":\"sha1:Y253HX73KOYBPP5B3CXAFGLQ53YONLA2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524254.28_warc_CC-MAIN-20190715215144-20190716001144-00343.warc.gz\"}"}
https://www.vonksieraden.nl/ball-mill/22417.html
[ "Online Service\n1. Home\n2. > Bond Equation For Recirculating Load On A Ball Mill\n\nBond Equation For Recirculating Load On A Ball Mill\n\n•", null, "Formula For Recirculating Load Of Cyclones And Mill\n\n•", null, "How To Calculate Circulating Load In Grinding Mill\n\nBall Mill Circulating Load Formula Ball Mill- Indutri. Calculate Sag Ball Mill Circulating Load Circulating Load Ratio Ball Mill Yukonjacksgrill The Bond Equation Is Used To Calculate Ball Mill Specific Energy From T To The Final Very Low Circulating Loads The Transfer Size Is A Function Of Mill Performancemultiplied By Adjustment Factors.Iterative algorithm for closed circuit circulation load calculation of recirculating load in ball mill circuit Circulating Load Formula Of Ball Mill Circulating Load Calculation In Closed Circuit Ball Mill calculating re Ball mill grinding circulating load ball mill .\n\n•", null, "Circulating Load In A Ball Mill\n\nRecirculating Load In Ball Milling Formula. Ball mill recirculating load calculation pdf circulating load slideshare bond equation for recirculating load on a ball mill calculaton of circulating load in cement ball mill here is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as .Calculate Sag Ball Mill Circulating Load. Very low circulating loads the transfer size is a function of mill performance circulating load calculation formula 911 metallurgist sep 10 2017 here is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a grinding circuit.\n\n•", null, "How To Calculate Circulating Load Binq Mining\n\nApril 23rd, 2019 - circulating load calculation in ball mill Circulating Load Calculation FormulaHere is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a grinding circuit For example your ball mill is in closed circuit with a set of cyclonescirculating load.Circulating Load Calculation Formula. 2018-7-27 Here is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a grinding circuit. bond equation for recirculating load on a ball mill. Bond, it was established that the optimum circulating load for a closed ball of classification.\n\n•", null, "Formula For Load Calculation Of Rolling Mill\n\nCalculating Load On A Ball Mill. Calculating load on a ball mill pdf circulating load calculation in grinding circuits circulating load calculation in grinding circuits 5 acknow ledges the authors than k nanc ial support f rom the brazil ian agencies c npq c ape s fapeg a nd fu nape 6. Calculation Of A Ball Mill Load Versus Level.Circulating load calculation in closed circuit ball mill. s. circulating load calculation made in a mill –.this plant includes one SAG mill in a closed circuit with a vibrating screen and one ball mill with a size classification . necessary to calculate the proportion of each mill's energy . of ball mill's circulating load was obtained, 149 , which is.\n\n•", null, "Calculation Of Circulating Load Of A Grinding Mill\n\n•", null, "How To Caluclate Recirculating Lod Of Ball Mill\n\nBond equation for recirculating load on a ball mill. Bond Ball Mill Index JKTech. 
A Bond Ball Mill Index Test (BBMWI) is a standard test for determining the Ball Mill Work Index In designing and optimising a milling circuit using the BBMWI, the following equations are used (Bond, 1961) produce a 250 circulating load.Ball mill circulating load gmecrusher.com. ball mill recirculating load for grinding fundamentals. both the density and flow of the cyclone feed stream are Circulating Load Formula In Ball Mill.\n\n•", null, "Ball Mill Circulating Load From Screen Anlysis\n\nBond equation for recirculating load on a ball mill Recirculating load = 100xball mill mill grinding circuit operating at a 250 circulating load. The empirical formula to calculate 11 Crushing and.Ball mill circulating load formula - MTM Crusher. balls and Cylpebs. ducted in a standard Bond ball mill loaded with various grinding media to Standard Bond balls charge 1.18mm limiting screen 20 circulating load. Here!.\n\n•", null, "On Literature On Recirculating Load Of Ball Mill\n\nHow to calculate circulating load in grinding mill. Ball Mill Circulating Load. Ball Mill Instruction Manual (PDF) - BICO Inc The FC Bond Ball Mill is a small universal laboratory mill used in calculating the , ofminus 6 mesh, stage crushed, dry feed was used and the circulation load.Calculate Sag Wet Ball Mill Circulating Load. Ball Mill Recirculating Load Calculation. The bond equation is used to calculate ball mill specific energy from t80 to the very low circulating loads the transfer size is a function of mill performance calculation of the required semiautogenous mill power based on the primary wet autogenous mill wam and a series of ball mills wam is customary.\n\n•", null, "Calculaton Of Circulating Load In Cement Ball Mill\n\nCrushing Equipment Ball Mill Circulating Load Calculation . BICO Inc. The F.C. Bond Ball Mill is a . bond equation for recirculating load on a ball mill. circulating load formula crusher BINQ Mining. volume loading charge calculation for cement mill RB Crusher Mill formula to calculate ball mill volume loading Gold Ore Crusher.Circulating Load Formula In Ball Mill. Circulating Load Formula In Ball Mill. Production capacity 0.65-615t h . Feeding Size ≤25mm . Discharging Size 0.075-0.89mm. Ball mill is also known as ball grinding mill. Ball mill is the key equipment for recrushing after the crushing of the materials.\n\n•", null, "Ball Mill Recirculating Load Calculation Pdf\n\nFormula For Recirculating Load Of Cyclones And Mill. Circulating load formula in ball mill klabrickellparents com015 08 10 calculation of circulating load with in ball mill circulating load calculation formulahere is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a grinding.Circulating load of dynamic separator in cement ball mill calculation photo of cement mill (mill trunnion) load calculation of grinding media in cement mill Sitemap pre Stone Crushing Industry In Uttarakhand next Youtube Manganese Ore To Concentrate.\n\n•", null, "Aug 03, 2021 Ball Mill Recirculating Load Calculation Pdf. Circulating load calculation in ball millirculating load calculation formulahere is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a grinding circuit for example your ball mill is in closed circuit with a set of cyclones -circulating load calculation in ball mill- grinding mill.Bond Ball Mill Index - JKTech. 
A Bond Ball Mill Index Test (BBMWI) is a standard test for determining the Ball Mill Work Index; in designing and optimising a milling circuit using the BBMWI, the Bond equation (1961) is used to produce a 250% circulating load.\n\n•", null, "Calculating Recirculating Load In Closed Circuit Ball Mill\n\nBall Mill Circulating Load. Circulating load calculation formula (2019-8-15): here is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocyclone as part of a grinding circuit. For example, your ball mill is in closed circuit with a set of cyclones. The grinding mill receives crushed ore feed. Ball mill media load calculation formula. Ball mill instruction manual pdf, BICO Inc. The F.C. Bond ball mill is a small universal laboratory mill used in calculating the grams. 700 cc of minus 6 mesh, stage crushed, dry feed was used and the circulation load. Unconnected media such as. Get Latest Price.\n\n•", null, "Circulating Load Of A Ball Mill\n\nVRM and ball mill circulating load, Page 1 of 1, Sep 07, 2011. Re: VRM and ball mill circulating load. Mainly in the USA, the term circulating load is often used; the circulating load is the percentage of coarse return in relation to fines and it can be calculated by coarse return TPH x 100 / mill output TPH, for the normal range of circulating load in a conventional circuit. Ball Mill Circulating Load. Calculating a grinding circuit’s circulating loads based on screen analysis of its slurries: compared to solids- or density-based circulating load equations, a precise method of determining grinding circuit tonnages uses the screen size distributions of the pulps instead of the dilution ratios. Pulp samples are collected around the ball mill or rod mill and.\n\n•", null, "Literature On Recirculating Load Of Ball Mill\n\nHow To Calculate Circulating Load In Ball Milling. Calculation Of Circulating Load In Cement Ball Mill. Here is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocyclone as part of a grinding circuit. For example, your ball mill is in closed circuit with a set of cyclones. The grinding mill receives crushed ore feed. The pulp densities around your cyclone. Undersize product and circulating load are screen analyzed, and the average of the last three net grams per revolution (Gbp) is the ball mill grindability. The ball mill work index, Wi (kWh/short ton), is calculated from the following equation (Bond 1961)." ]
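The last paragraph above refers to Bond's (1961) work index equation without reproducing it. A commonly cited form of that equation, together with a screen-analysis estimate of the circulating load ratio around a classifier, is sketched below; the function names and the numerical inputs are illustrative assumptions, not values taken from any of the quoted pages.

```python
# Sketch: Bond ball mill work index (common literature form) and a
# screen-analysis circulating load ratio from a fines mass balance around the cyclone.

def bond_work_index(p1_um, gbp, f80_um, p80_um):
    """Ball mill work index Wi in kWh per short ton.
    p1_um  : closing screen aperture of the Bond test (micrometres)
    gbp    : net grams of undersize produced per mill revolution (grindability)
    f80_um : 80% passing size of the feed (micrometres)
    p80_um : 80% passing size of the product (micrometres)
    """
    return 44.5 / (p1_um**0.23 * gbp**0.82 * (10 / p80_um**0.5 - 10 / f80_um**0.5))

def circulating_load_ratio(feed_fines, overflow_fines, underflow_fines):
    """Circulating load ratio C/T from the fraction finer than a reference size
    in the cyclone feed, overflow and underflow (steady-state fines balance)."""
    return (overflow_fines - feed_fines) / (feed_fines - underflow_fines)

# Illustrative numbers only:
print(round(bond_work_index(p1_um=106, gbp=1.5, f80_um=2000, p80_um=80), 1))  # ~12.2 kWh/short ton
print(round(100 * circulating_load_ratio(0.45, 0.80, 0.25)))                  # ~175 % circulating load
```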
[ null, "https://www.vonksieraden.nl/categorypic/ball-mill/23.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/24.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/25.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/26.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/27.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/28.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/29.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/30.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/31.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/32.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/33.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/34.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/35.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/36.jpg", null, "https://www.vonksieraden.nl/categorypic/ball-mill/37.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8115292,"math_prob":0.9859478,"size":9462,"snap":"2022-05-2022-21","text_gpt3_token_len":1993,"char_repetition_ratio":0.31592304,"word_repetition_ratio":0.29966998,"special_character_ratio":0.19277108,"punctuation_ratio":0.07655502,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99753183,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-19T16:24:09Z\",\"WARC-Record-ID\":\"<urn:uuid:1ba55cf6-a29e-4d51-8c05-e27d75e34937>\",\"Content-Length\":\"21565\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d71a2ade-d706-4cfe-87f2-2fe619627fe0>\",\"WARC-Concurrent-To\":\"<urn:uuid:d28133ee-afed-4d7a-9d35-fbc87ee089dd>\",\"WARC-IP-Address\":\"104.21.39.230\",\"WARC-Target-URI\":\"https://www.vonksieraden.nl/ball-mill/22417.html\",\"WARC-Payload-Digest\":\"sha1:LNR4BBIT4QEJB2MNN6YVA2BDW6QLXQZF\",\"WARC-Block-Digest\":\"sha1:6XCGEFO4X22G65GC23JGJNBNHXKLVYJP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301475.82_warc_CC-MAIN-20220119155216-20220119185216-00435.warc.gz\"}"}
http://samm.univ-paris1.fr/Lasso-and-Group-Lasso-for-logistic
[ "# Lasso and Group Lasso for logistic regression model\n\nMarius Kwemou (Laboratoire Statistique et Génome, Evry)\nvendredi 16 novembre 2012\n\nRésumé : We consider the problem of estimating a function", null, "in\nlogistic regression model. We propose to estimate this function", null, "by a sparse approximation build as a linear combination of\nelements of a given dictionary of", null, "functions. This sparse\napproximation is selected by the Lasso or Group Lasso procedure. In\nthis context, we state non asymptotic oracle inequalities for Lasso\nand Group Lasso under restricted eigenvalues assumption as introduced\nin Bickel et al. (2009). Those theoretical results are illustrated\nthrough a simulation study.\n\nCet exposé se tiendra en salle C20-13, 20ème étage, Université\nParis 1, Centre Pierre Mendès-France, 90 rue de Tolbiac, 75013 Paris" ]
[ null, "http://samm.univ-paris1.fr/local/cache-TeX/d43e51bee35b78083e05bcfc2119b318.png", null, "http://samm.univ-paris1.fr/local/cache-TeX/d43e51bee35b78083e05bcfc2119b318.png", null, "http://samm.univ-paris1.fr/local/cache-TeX/83878c91171338902e0fe0fb97a8c47a.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53251165,"math_prob":0.80435926,"size":829,"snap":"2020-24-2020-29","text_gpt3_token_len":206,"char_repetition_ratio":0.10060606,"word_repetition_ratio":0.0,"special_character_ratio":0.20868516,"punctuation_ratio":0.112676054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9777099,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,10,null,10,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-15T00:05:37Z\",\"WARC-Record-ID\":\"<urn:uuid:2fc13e1a-6c9f-40f7-80a4-372a9c242117>\",\"Content-Length\":\"20501\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d327307-2e8a-4580-a72f-146fb9dffbb7>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1368c3f-c6c0-416b-a34f-bae8b17b0dd3>\",\"WARC-IP-Address\":\"194.214.26.146\",\"WARC-Target-URI\":\"http://samm.univ-paris1.fr/Lasso-and-Group-Lasso-for-logistic\",\"WARC-Payload-Digest\":\"sha1:LS32M3RNVWKYFQD5I5XMRRJXSEJPJZM7\",\"WARC-Block-Digest\":\"sha1:RNEBLM2JM5S3ZJRTYYGGOFE25WDMKHFO\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657151761.87_warc_CC-MAIN-20200714212401-20200715002401-00345.warc.gz\"}"}
https://groupprops.subwiki.org/wiki/Endo-invariance_implies_join-closed
[ "# Endo-invariance implies strongly join-closed\n\nJump to: navigation, search\nThis article gives the statement and possibly, proof, of an implication relation between two subgroup metaproperties. That is, it states that every subgroup satisfying the first subgroup metaproperty (i.e., Endo-invariance property (?)) must also satisfy the second subgroup metaproperty (i.e., Strongly join-closed subgroup property (?))\nView all subgroup metaproperty implications | View all subgroup metaproperty non-implications\n\n## Statement\n\n### Verbal statement\n\nAny subgroup property that arises as an invariance property with respect to endomorphisms in the function restriction formalism is strongly join-closed, viz it is both join-closed and trivially true.\n\n### Symbolic statement\n\nLet", null, "$p$ be an endomorphism property. Let", null, "$I$ be a (possibly empty) indexing set. Let", null, "$H_i$ is a family of subgroups of", null, "$G$ indexed by", null, "$I$. Assume that for every function", null, "$f$ on", null, "$G$ satisfying", null, "$p$,", null, "$f(H_i)$", null, "$H_i$ (viz", null, "$H_i$ satisfies the invariance property for", null, "$p$).\n\nThen, if", null, "$H$ denotes the join of (viz, subgroup generated by) all", null, "$H_i$s,", null, "$H$ also satisfies the invariance property for", null, "$p$. In other words, whenever", null, "$f$ is a function on", null, "$G$ satisfying", null, "$p$,", null, "$f(H)$", null, "$H$.\n\n## Definitions used\n\n### Invariance property\n\nPLACEHOLDER FOR INFORMATION TO BE FILLED IN: [SHOW MORE]\n\n### Strongly join-closed subgroup property\n\nA subgroup property is termed strongly join-closed if given any family of subgroups having the property, their join (viz the subgroup generated by them) also has the property. Note that just saying that a subgroup property is join-closed simply means that given any nonempty family of subgroups with the property, the join also has the property.\n\nThus, the property of being strongly intersection-closed is the conjunction of the properties of being intersection-closed and trivially true, viz satisfied by the trivial subgroup.\n\n## Facts used\n\n1. Homomorphisms commute with joins: This states that if", null, "$\\sigma:G \\to K$ is a homomorphism, and", null, "$H_i, i \\in I$ is a collection of subgroups of", null, "$G$ whose join is", null, "$H$, then", null, "$\\sigma(H)$ is the join of", null, "$\\sigma(H_i), i \\in I$.\n\n## Proof\n\nGiven: A property", null, "$p$ of endomorphisms, a group", null, "$G$ with subgroups", null, "$H_i, i \\in I$ whose join is a subgroup", null, "$H$ of", null, "$G$. Further, each", null, "$H_i$ is invariant under each endomorphism", null, "$\\sigma$ of", null, "$G$ that satisfies", null, "$p$.\n\nTo prove:", null, "$H$ is invariant under each endomorphism", null, "$\\sigma$ of", null, "$G$ that satisfies property", null, "$p$.\n\nProof: We pick any endomorphism", null, "$\\sigma$ of", null, "$G$ that satisfies", null, "$p$.\n\n1. For each", null, "$i \\in I$,", null, "$\\sigma(H_i)$ is contained in", null, "$H_i$: This follows from the assumption on the", null, "$H_i$s.\n2.", null, "$\\sigma(H)$ is the join of the", null, "$\\sigma(H_i)$s: This follows from fact (1), with", null, "$K = G$.\n3.", null, "$\\sigma(H)$ is contained in", null, "$H$: Since each", null, "$\\sigma(H_i)$ is contained in", null, "$H_i$, it is in particular contained in", null, "$H$, hence their join, which is", null, "$\\sigma(H)$, is also contained in", null, "$H$.\n\nThis completes the proof." ]
[ null, "https://groupprops.subwiki.org/w/images/math/8/3/8/83878c91171338902e0fe0fb97a8c47a.png ", null, "https://groupprops.subwiki.org/w/images/math/d/d/7/dd7536794b63bf90eccfd37f9b147d7f.png ", null, "https://groupprops.subwiki.org/w/images/math/4/5/d/45d4bf87d928aada05c4b8df8e5f057e.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/d/d/7/dd7536794b63bf90eccfd37f9b147d7f.png ", null, "https://groupprops.subwiki.org/w/images/math/8/f/a/8fa14cdd754f91cc6554c9e71929cce7.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/8/83878c91171338902e0fe0fb97a8c47a.png ", null, "https://groupprops.subwiki.org/w/images/math/9/f/5/9f5dafccec123d5b374c0d321df7de45.png ", null, "https://groupprops.subwiki.org/w/images/math/4/5/d/45d4bf87d928aada05c4b8df8e5f057e.png ", null, "https://groupprops.subwiki.org/w/images/math/4/5/d/45d4bf87d928aada05c4b8df8e5f057e.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/8/83878c91171338902e0fe0fb97a8c47a.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/4/5/d/45d4bf87d928aada05c4b8df8e5f057e.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/8/83878c91171338902e0fe0fb97a8c47a.png ", null, "https://groupprops.subwiki.org/w/images/math/8/f/a/8fa14cdd754f91cc6554c9e71929cce7.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/8/83878c91171338902e0fe0fb97a8c47a.png ", null, "https://groupprops.subwiki.org/w/images/math/e/6/6/e664efe3aa659d458e515f5a18bec7c9.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/5/8/9/589fc60604e7c291bc78d3c360b27d4a.png ", null, "https://groupprops.subwiki.org/w/images/math/7/d/2/7d284c07464957774625519cbe575ac8.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/3/c/a/3ca1b5ac1b65fc18f17b35b4a1877fa2.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/1/831b5ce7106cb8f9a44c94a83e01a3ae.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/8/83878c91171338902e0fe0fb97a8c47a.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/7/d/2/7d284c07464957774625519cbe575ac8.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/4/5/d/45d4bf87d928aada05c4b8df8e5f057e.png ", null, "https://groupprops.subwiki.org/w/images/math/9/d/4/9d43cb8bbcb702e9d5943de477f099e2.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/8/83878c91171338902e0fe0fb97a8c47a.png ", null, 
"https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/9/d/4/9d43cb8bbcb702e9d5943de477f099e2.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/8/83878c91171338902e0fe0fb97a8c47a.png ", null, "https://groupprops.subwiki.org/w/images/math/9/d/4/9d43cb8bbcb702e9d5943de477f099e2.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/8/83878c91171338902e0fe0fb97a8c47a.png ", null, "https://groupprops.subwiki.org/w/images/math/4/3/0/430e7de2484db3b5a632379437c6cef7.png ", null, "https://groupprops.subwiki.org/w/images/math/7/2/a/72a44018c5a6a427c668c8ae9522f00d.png ", null, "https://groupprops.subwiki.org/w/images/math/4/5/d/45d4bf87d928aada05c4b8df8e5f057e.png ", null, "https://groupprops.subwiki.org/w/images/math/4/5/d/45d4bf87d928aada05c4b8df8e5f057e.png ", null, "https://groupprops.subwiki.org/w/images/math/3/c/a/3ca1b5ac1b65fc18f17b35b4a1877fa2.png ", null, "https://groupprops.subwiki.org/w/images/math/7/2/a/72a44018c5a6a427c668c8ae9522f00d.png ", null, "https://groupprops.subwiki.org/w/images/math/8/1/c/81c6e68c8c8768e18fb13db459cc9963.png ", null, "https://groupprops.subwiki.org/w/images/math/3/c/a/3ca1b5ac1b65fc18f17b35b4a1877fa2.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/7/2/a/72a44018c5a6a427c668c8ae9522f00d.png ", null, "https://groupprops.subwiki.org/w/images/math/4/5/d/45d4bf87d928aada05c4b8df8e5f057e.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/3/c/a/3ca1b5ac1b65fc18f17b35b4a1877fa2.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9059586,"math_prob":0.9974021,"size":2953,"snap":"2019-35-2019-39","text_gpt3_token_len":634,"char_repetition_ratio":0.20617159,"word_repetition_ratio":0.042826552,"special_character_ratio":0.19505587,"punctuation_ratio":0.12667947,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999677,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,5,null,null,null,null,null,null,null,null,null,5,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-22T17:39:43Z\",\"WARC-Record-ID\":\"<urn:uuid:1176c017-e2d6-4199-a98b-9fd85e479b95>\",\"Content-Length\":\"36815\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7343e814-d957-4f81-a148-86d7cd141957>\",\"WARC-Concurrent-To\":\"<urn:uuid:b7d257f9-1f9f-4172-897e-8a3813b38d75>\",\"WARC-IP-Address\":\"96.126.114.7\",\"WARC-Target-URI\":\"https://groupprops.subwiki.org/wiki/Endo-invariance_implies_join-closed\",\"WARC-Payload-Digest\":\"sha1:HWKABOOWP57ARQBT332KAJPEACBIFXOV\",\"WARC-Block-Digest\":\"sha1:SOHA72T2O2GPKKFWWG33PG4FBVG2U3WT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027317339.12_warc_CC-MAIN-20190822172901-20190822194901-00190.warc.gz\"}"}
https://wmbriggs.com/post/23203/
[ "Class - Applied Statistics\n\n# Free Data Science Class: Predictive Case Study 1, Part III\n\nYou must review: Part I, II. Not reviewing is like coming to class late and saying “What did I miss?” Note the New & Improved title!\n\nHere are the main points thus far: All probability is conditional on the assumptions made; not all probability is quantifiable or must involve observables; all analysis must revolve on ultimate decisions; unless deduced, all models (AI, ML, statistics) are ad hoc; all continuum-based models are approximations; and the Deadly Sin of Reification lurks.\n\nWe are using the data from Uncertanity, so that those bright souls who own the book can follow along. We are interested in predicting the college grade point of certain individuals at the end of their first year. We spent two sessions defining what we mean by this. We spend more time now on this most crucial question.\n\nThis is part of the process most neglected in the headlong rush to get to the computer, a neglect responsible for vast over-certainties.\n\nNow we learned that CGPA is a finite-precision number, a number that belongs to an identifiable set, such as 0, 0.0625, and so on, and we know this because we know the scoring system of grades and we know the possible numbers of classes taken. The finite precision of CGPA can be annoyingly precise. Last time we were out at six or eight decimal places, precision far beyond any decision (except ranking) I can think to make.\n\nTo concentrate on this decision I put myself in the mind of a Dean—and immediately began to wonder why all my professors aren’t producing overhead. Laying that aside (but still sharpening my ax) I want to predict the chance any given student will have a CGPA of 0, 1, 2, 3, or 4. These buckets are all I need for the decision at hand. Later, we’ll increase the precision.\n\nKnowing nothing except the grade must be one of these 5 numbers, the probability of a 4 is 1/5. This is the model:\n\n(1) Pr(CGPA = 4 | grading rules),\n\nwhere “grading rules” is a proposition defining how CGPAs are calculated, and with information of what level of precision that is of interest to us, and possibly to nobody else; “grading rules” tells us CGPA will be in the buckets 0, 1, 2, 3, 4, for instance.\n\nThe numerical probability of 1/5 is deduced on the assumptions made; it is therefore the correct probability—given these assumptions. Notice this list of assumptions does not contain all the many things you may also know about GPAs. Many of these bytes of information will be non-quantified and unquantifiable, but if you take cognisance of any of them, they become part of a new model:\n\n(2) Pr(CGPA = 4 | grading rules, E),\n\nwhere E is a compound proposition containing all the semi-formal and informal things (evidence) you know about GPAs, like e.g. grade inflation. This model depends on E, and thus (2) will not likely give quantified or quantifiable answers. Just because our information doesn’t appear in the formal math does not make (2) not a model; or, said another way, our models are often much more than the formal math. 
If, say, E is only loose notions on the ubiquity of grade inflation, then (2) might equal “More than a 20% chance, I’ll tell you that much.”\n\nTo the data\n\nWe have made use of no observations so far, which proves, if it already wasn’t obvious, that observations are not needed to make probability judgments (which is why frequentism fails philosophically), and that our models are often more reliant upon intelligence not contained in (direct) observation.\n\nBut since this is a statistics-machine learning-artificial intelligence class, let’s bring some numbers in!\n\nLet’s suppose that the only, the sole, the lone observation of past CGPAs was, say, 3. I mean, I have one old observation of CGPA = 3. I want now to compute\n\n(3) Pr(CGPA = 4 | grading rules, old observation).\n\nIntuitively, we expect (3) to decrease from 1/5 to indicate the increased chance of a new CGPA = 3, because if all we saw was an old 3, there might be something special about 3s. That means we actually have this model, and not (3):\n\n(4) Pr(CGPA = 4 | grading rules, old observation, loose math notions).\n\nThere is nothing in the world wrong with model (4); it is the kind of mental model we all use all the time. Importantly, it is not necessarily inferior to this new model:\n\n(5) Pr(CGPA = 4 | grading rules, old observation, fixed math notions),\n\nwhere we move to formally define how all the parts on the right hand side mathematically relate to the left hand side.\n\nHow is this formality conducted?\n\nWell, it can be deduced. Since CGPA can belong only to a fixed, finite set (as “grading rules” insists), we can deduce (5). In what sense? There will be so many future values we want to predict; out of (say) 10 new students, how many As, Bs, etc. are we likely to see and with what chance? This is perfectly doable, but it is almost never done.\n\nThe beautious (you heard me: beautious) thing about this deduction is that no parameters are required in (5) (nor are any “hidden layers”, nor is any “training” needed). And since no parameters are required, no “priors” or arguments about priors crop up, and there is no need of hypothesis testing, parameter estimates, confidence intervals, or p-values. We simply produce the deduced probabilities. Which is what we wanted all along!\n\nIn Uncertainty, I show this deduction when the number of buckets is 2 (here it is 5). For modest n, the result is close to a well-known continuous-parameterized approximation (with “flat prior”), an approximation we’ll use later.\n\nHere (see the book or this link for the derivation) (5) as an approximation works out to be\n\n(5) Pr(CGPA = 4 | GR, n_3 = 1, fixed math) = (1 + n_4)/(n + 5),\n\nwhere n_j is the number of js observed in the old data, and n is the number of old data points; thus the probability of a new CGPA = 4 is 1/6; for a new CGPA = 3 it is 2/6; also “fixed math” has a certain meaning we explore next time. Model (5), then, is the answer we have been looking for!\n\nFormally, this is the posterior predictive distribution for a multinomial model with a Dirichlet prior. It is an approximation, valid fully only at “the limit”. As an approximation, for small n, it will exaggerate probabilities, make them sharper than the exact result. (For that exact result for 2 buckets, see the book. 
If we used the exact result here the probabilities for future CGPAs would with n=1 remain closer to 1/5.)\n\nNow since most extant code and practice revolves around continuous-parameterized approximations, and we can make do with them, we’ll also use them. But we must always keep in mind, and I’ll remind us often, that these are approximations, and that spats about priors and so forth are always distractions. However, as the approximation is part of our right-hand-side assumptions, the models we deduce are still valid models. How to test which models worked best in our decision is a separate problem we’ll come to.\n\nHomework: think about the differences in the models above, and how all are legitimate. Ambitious students can crack open Uncertainty and use it to track down the deduced solution for more than 2 buckets; cf. page 143. Report back to me.", null, "### 6 replies »\n\n1.", null, "Sheri says:\n\nNew and Improved often means repackaged in an effort to sell to new suckers. It’s not a term I would ever use.\n\nInteresting lesson, though.\n\n2.", null, "Ken says:\n\nBob Kurland made this remark/observation at the end of Part 1:\n\n“…when people do call the researcher to task for misusing statistics, as was done with one AGW proponent, they get sued (as did Mark Steyn and National Review). The prospect of legal entanglement does inhibit calling fakirs out, I would think.”\n\nIt’s a good point — “legal entanglement” and too real in our world.\n\nBriggs says at the end, today: “…think about the differences in the models above, and how all are legitimate.”\n\nThere’s an implicit, potential, flaw there — all models are “legitimate” only to some degree, some limited validity.\n\nBob’s & Briggs’ points too often link, in the real world, in bizarre ways —\n\n– When a decision violates some held belief or value (e.g. global warming is NOT an issue of concern; on emotional themes emotional thinking too often creeps in, an analysis that doesn’t support, but doesn’t refute either, some belief might cause a researcher all kinds of trouble, which is nonsense if the research is merely too limited to reach a determination…but that happens too).\n\n– The “poster case” for such seems to come, of late, from Italy:\n\n— Scientists prosecuted in court for not predicting an earthquake or its severe effects. No model of that phenomena has progressed to a point where such an expectation is even remotely reasonable.\n\n— Prosecutor Mignini’s rabid and nonsensical grounds (sans any DNA or other credible physical evidence) for prosecuting U.S. citizen Knox for a murder, AFTER the actual killer had been convicted and imprisoned. A decision that was overturned (sensibility prevailed…but for how long will that persist?).\n\nA point being made by illustration is, in today’s topsy turvy world, emotional reasoning (a type of mental defect, when it occurs) is gaining ground as valid justifications for legal policies, and, court decisions. …to such an extent that such becomes suitable discussion in a topic area that should be so quantitatively emotionless as stats….\n\n3.", null, "JH says:\n\nIn Uncertainty, I show this deduction when the number of buckets is 2 (here it is 5). 
For modest n, the result is close to a well-known continuous-parameterized approximation (with “flat prior”), an approximation we’ll use later.\n\nWhy use the approximation derived via a PARAMETRIC model when you have a deductive method?\n\nOld old school probabilists used the non-parametric empirical distribution to estimate the probability of an event or a set.\n\n4.", null, "Briggs says:\n\nWhy use the approximation derived via a PARAMETRIC model when you have a deductive method?\n\nExcellent question. Because we don’t have the math worked out for the deductive method nearly as well as we do for parametric cases. See the link on the parameters article where we agree with Simpson that the math for finite discrete problems is harder than for continuous problems. Part of this is lack of experience. Once deductive approaches are common, the math won’t seem nearly so hard.\n\nUpdate Another reason to use, at least for now, parametric methods is software. There is lots of it for parametric models which we can repurpose. For deductive methods, we’re largely coding from scratch. This of course slows down adoption. I discuss this more in future lectures.\n\nOld old school probabilists used the non-parametric empirical distribution to estimate the probability of an event or a set.\n\nOld school ones, yes, but not Old Old School ones. As I write about in Uncertainty, classical approaches in probability theory were empirically biased, or logically blind. As a logic example I gave Lewis Carroll’s French-speaking cat. There are no real French-speaking cats, but we can still examine the logic of propositions containing them. Logic is the study of the connections between propositions. Probability, too. Probability then can also apply to non-empirical examples. There are in these examples no “non-parametric empirical distributions”. We must use deduction.\n\n5.", null, "JH says:\n\nIf there are observations, one can definitely use the empirical probability distribution to estimate the unknown probability. Empirically biased? Logically blind? Are we talking about the same “empirical probability distribution”? Yes, you might say it is biased because we don’t know the true answer to the probability of interest.\n\nSome people say that the so-called principle of ignorance is classical probability. You might want to claim it to be logical, but some people argue it is simply a reasonable choice out of no better choice. And such a choice is needed for some reason.\n\nYes, probability can definitely apply to non-empirical examples. No one denies that. Yes, we can deduce the probability given well-defined premises. In this case, why would you need DATA science? If one needs data for verification, then one admits the probability is not deduced." ]
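A minimal numerical sketch of the deduced predictive probability quoted in the post above, Pr(CGPA = j | grading rules, old data) ≈ (1 + n_j)/(n + 5). The single old observation CGPA = 3 is from the post; taking the five possible CGPAs to be 0–4, the function name, and the printed checks are illustrative choices made here.

```python
# Posterior predictive probabilities for a K-bucket "grading rules" model,
# using the flat-prior (Dirichlet(1,...,1)) multinomial approximation
# Pr(next = j | old data) = (1 + n_j) / (n + K) discussed in the post.
from fractions import Fraction
from collections import Counter

def predictive_probs(old_data, buckets):
    """Return Pr(next observation = b) for each bucket b, given the old data."""
    counts = Counter(old_data)
    n, k = len(old_data), len(buckets)
    return {b: Fraction(1 + counts[b], n + k) for b in buckets}

buckets = [0, 1, 2, 3, 4]               # the five possible CGPAs under the grading rules
probs = predictive_probs([3], buckets)  # one old observation: CGPA = 3

print(probs[4])              # 1/6, as stated in the post
print(probs[3])              # 1/3, i.e. 2/6
print(sum(probs.values()))   # 1, a quick sanity check
```

With no old data at all, the same function returns 1/5 for every bucket, matching the pre-data answer discussed earlier in the post; no parameters are estimated and no priors are argued over, only counts are plugged in.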
[ null, "https://i2.wp.com/wmbriggs.com/wp-content/uploads/2017/06/class.session.jpg", null, "https://secure.gravatar.com/avatar/9745fc0a48efaa32e98256127e536e96", null, "https://secure.gravatar.com/avatar/ec993d01e63cfb21f0f3f50b05e3628e", null, "https://secure.gravatar.com/avatar/63d0414d5f1ed6ff09f05f291e18f15d", null, "https://secure.gravatar.com/avatar/e60891efc97591a5b91903f15bf8e6e8", null, "https://secure.gravatar.com/avatar/aa8e994245cf00d48adac6fe391f27fa", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9353415,"math_prob":0.9254919,"size":11777,"snap":"2020-34-2020-40","text_gpt3_token_len":2665,"char_repetition_ratio":0.11152638,"word_repetition_ratio":0.041853514,"special_character_ratio":0.21992019,"punctuation_ratio":0.121783875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9574623,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T02:01:28Z\",\"WARC-Record-ID\":\"<urn:uuid:28637604-bb38-493e-bc4a-023a2b543cd0>\",\"Content-Length\":\"73507\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28be1f3e-9e49-49fc-bbe5-e1902191925e>\",\"WARC-Concurrent-To\":\"<urn:uuid:43f3a323-de4b-481a-bcfe-a688bd5edbdb>\",\"WARC-IP-Address\":\"208.97.144.72\",\"WARC-Target-URI\":\"https://wmbriggs.com/post/23203/\",\"WARC-Payload-Digest\":\"sha1:ALT2CUY5HHT43S7GGZVWPHGZPGH4Z73J\",\"WARC-Block-Digest\":\"sha1:K4YRO26S6G7BF4AMXIVXSI2VXHPUQ3A2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402130531.89_warc_CC-MAIN-20200930235415-20201001025415-00046.warc.gz\"}"}
https://chem.libretexts.org/Courses/Oregon_Tech_PortlandMetro_Campus/OT_-_PDX_-_Metro%3A_General_Chemistry_II/01%3A_The_Quantum_Mechanical_Model_of_the_Atom
[ "# 1: The Quantum Mechanical Model of the Atom\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\nUnit Objectives\n\nBy the end of this unit, you should be able to:\n\n• Explain the basic behavior of waves, including travelling waves and standing waves\n• Describe the wave nature of light\n• Use appropriate equations to calculate related light-wave properties such as period, frequency, wavelength, and energy\n• Distinguish between line and continuous emission spectra\n• Describe the particle nature of light\n• Extend the concept of wave–particle duality that was observed in electromagnetic radiation to matter as well\n• Calculate deBroglie wavelengths\n\nThumbnail: 3D views of some the 2s atomic orbitals showing probability density and phase. Image used with permission (CC BY-AS 4.0; Geek3).\n\n1: The Quantum Mechanical Model of the Atom is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50061494,"math_prob":0.99918693,"size":1754,"snap":"2022-05-2022-21","text_gpt3_token_len":531,"char_repetition_ratio":0.27657142,"word_repetition_ratio":0.21714285,"special_character_ratio":0.32383123,"punctuation_ratio":0.084033616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9734162,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T18:01:16Z\",\"WARC-Record-ID\":\"<urn:uuid:2366efea-c791-4f95-af78-c6f5f4396a99>\",\"Content-Length\":\"104045\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5bf1d33f-f131-443d-9fa2-11c9ff27f08f>\",\"WARC-Concurrent-To\":\"<urn:uuid:e55011e2-3fda-4450-b813-6717366f4675>\",\"WARC-IP-Address\":\"99.86.224.77\",\"WARC-Target-URI\":\"https://chem.libretexts.org/Courses/Oregon_Tech_PortlandMetro_Campus/OT_-_PDX_-_Metro%3A_General_Chemistry_II/01%3A_The_Quantum_Mechanical_Model_of_the_Atom\",\"WARC-Payload-Digest\":\"sha1:336PWMJHRLYK4R7GK77ZTQCTBMD3M22X\",\"WARC-Block-Digest\":\"sha1:DS2X22IIKWGQEIVSGYWJQN4PLKBPS4JV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662519037.11_warc_CC-MAIN-20220517162558-20220517192558-00786.warc.gz\"}"}
https://stats.stackexchange.com/questions/tagged/regression?tab=newest&page=4
[ "# Questions tagged [regression]\n\nTechniques for analyzing the relationship between one (or more) \"dependent\" variables and \"independent\" variables.\n\n27,272 questions\nFilter by\nSorted by\nTagged with\n128 views\n\n### Why doesn't adding additional explanatory variables in a logistic regression model decrease our primary explanatory variables variance?\n\nImagine a clinical trial setting where we have binary outcome Y and we are interested in the effects of treatment X. Lets say we also have additional explanatory covariates Z and W. Thus our ...\n• 594\n50 views\n\n### MLE to address multicollinearity in linear regression\n\nOLS estimation assumes that the explanatory variables are independent in the linear regression model. There isn't such assumption when using the MLE estimation. So, my question is, can we use MLE to ...\n• 11\n19 views\n\n### OLS R-Squared from Sliced OLS Regression\n\nI have the following question: suppose we have a data set with 3000 observations $(X,y)$ and $X$ can be matrix. So we want to use a bunch of features to predict $y$. Suppose we sliced the data into ...\n• 213\n1 vote\n31 views\n\n### How to test whether there is a significant (general) within group trend with data from many groups\n\nI am having trouble identifying the correct statistical method for the following problem: I have data on a characteristic (e.g. body length) from several individuals per species, distributed in an ...\n• 133\n1 vote\n16 views\n\n### How to interpret the coefficient of a limited independent Variable (Index)?\n\nI assume this is a very simple question, however I am not sure about it. I have a regression table in front of me that contains the coefficients of a linear regression. The dependent variable is ...\n• 95\n15 views\n\n### How to optimize a clinical scoring algorithm?\n\nI've made two studies on clinical data that correlates with a disease. The clinical data can be aggregated into a score, such that the higher the score the higher your % of having the disease. However,...\n• 1,861\n1 vote\n19 views\n\n### Combining/updating parameters from multiple estimations\n\nTake a simple example of performing two independent linear regressions on a set of x-y data, in the form of y = mx + b, each using half of the data. I will obtain two separate estimates for m1, b1, m2,...\n• 11\n46 views\n\n### How do I interpret a regression model when there are impossible additive effects?\n\nLet's say I have a model of count data as a function of the month of the year along with an additive effect of season (factor with 2 levels Wet and Dry which correspond to Jan - June and July to Dec ...\n• 93\n1 vote\n22 views\n\n### What is the difference between lm() function and caret::train() function when it comes to creating linear regression models? [duplicate]\n\nWhen applying the lm function as follows (the assumptions were not considered. The purpose of this example is just to make my question clear) : ...\n• 53\n36 views\n\n### Entropy Balancing and regression\n\nI have a panel data set consisting of a treatment and a control group. The control group contains much more observations than the treatment group. In order to adjust some specific Variables between ...\n• 95\n76 views\n\n### What is the inverse normal transformation (INT) and what are the reasons behind using it?\n\nI noticed a statistical method called inverse normal transformation in the following research article FTO genotype is associated with phenotypic variability of body mass index. 
I attached the ...\n• 1\n1 vote\n37 views\n\n### Panel data regression with time varying treatments and fixed effects\n\nExperts, I have some trouble concerning my regression model for a panel data analysis. The dataset includes observations of 200 firms over a period of 6 years (2000 - 2005) regarding merger activities ...\n• 95\n1 vote\n26 views\n\n### Question on estimating (OLS) the ATE of RCT with multiple (2) treatments\n\nUpdated: I do not have enough points to comment so... Thank you Ben, you did interpret my question correctly. There are three treatment categories: control, A, and B. Thank you for clarifying that ...\n• 11\n29 views\n\n### How does one deal with linear regression with heteroscedasticity?\n\nSuppose I have a dataset with outcome continuous. I applied various transformations on either covariate, outcome or both. I have also tried polynomial terms. I always get over heteroscedasticity when ...\n• 1,169\n1 vote\n66 views\n\n### Marginal structural model - help with some concepts\n\nI'm trying to gain some (deeper) understanding of MSM's - what exactly they are and when they might be appropriate to use. Are my thoughts on the following correct (please feel free to correct any ...\n• 235\n43 views\n\n### How do we make predictions for future data when you have lagged dependent features used in training?\n\nI am executing a lightGBM model to forecast my units sold (qty) over a period of time. Objective is to run a model for each product group and be able to capture the trends, price elasticity, etc and ...\n34 views\n\n### Anova with stratified Cox model\n\nI'd like to use anova to investigate which variables are important to my Cox model outcome. I tried using anova from ...\n• 335\n30 views\n\n### How to represent the interval or uncertainty on regression predictions in an 'experimental vs predicted' plot?\n\nUsing an example similar to the one from R predict, simulate some independent variable ($x$) data, map them to an observed ...\n• 569\n1 vote\n13 views\n\n### Ordinal regression or Spearman rho\n\nI am a complete beginner in statistics and I am confuse which method is more appropriate. I have two variables; The independent variable is continuous ( hours) while my dependent variable is ordinal ...\n• 11" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91110325,"math_prob":0.77255625,"size":14057,"snap":"2022-40-2023-06","text_gpt3_token_len":3261,"char_repetition_ratio":0.1581157,"word_repetition_ratio":0.03475712,"special_character_ratio":0.23924023,"punctuation_ratio":0.1281216,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9686369,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T04:49:32Z\",\"WARC-Record-ID\":\"<urn:uuid:828ac2de-ba11-4ac5-aa77-9eaef131f97c>\",\"Content-Length\":\"325373\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:735d8329-8f94-4881-bfa0-8ba13db08306>\",\"WARC-Concurrent-To\":\"<urn:uuid:abe838ea-0b80-47da-a284-698a700945aa>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/tagged/regression?tab=newest&page=4\",\"WARC-Payload-Digest\":\"sha1:HYO3VUIA57NKSC6Q6BZBC6GBHIBEQSGH\",\"WARC-Block-Digest\":\"sha1:MYYA6TKPVR55AZZ7ACLNJB2MA24X4YUG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337473.26_warc_CC-MAIN-20221004023206-20221004053206-00177.warc.gz\"}"}
https://jubilatka.net/great-value-ypxt/fe9f86-atomic-mass-formula
[ "The following formula is used to calculate the average atomic mass of a substance. More Videos. Formula to calculate formula mass. The average atomic mass of an element can be found on the periodic table, typically under the elemental symbol. Atomic Mass Formula. Atomic mass is the sum of all of the protons, electrons, and neutrons in an atom. The process above can be generalized with the following formula: Here, m is the mass of a specific isotope, and f is the fractional abundance. This is a good check to … chlorine-35: atomic mass = 34.969 amu and % abundance = 75.77% Solution: Ans: the atomic mass of the metal is 208.98 and its equivalent mass is 69.66. By adding together the number of protons and neutrons and multiplying by 1 amu, you can calculate the mass of the atom. This is basically the weighted average of the isotopes for a particular element. While the mass number is the sum of the protons and neutrons in an atom, the atomic number is only the number of protons. Known . Example: convert 15 u to g: 15 u = 15 × 1.6605402E-24 g = 2.4908103E-23 g. Popular Weight And Mass … The atomic number is the value found associated with an element on the periodic table because it is the key to the element's identity. Average atomic mass = f 1 M 1 + f 2 M 2 +… + f n M n where f is the fraction representing the natural abundance of the isotope and M is the mass number (weight) of the isotope. As long as all values of f add up to one, you're good to go. This list contains the 118 elements of chemistry. click on any element's name for further information on chemical properties, environmental data or health effects.. Sample Problem: Calculating Atomic Mass . This is also sometimes used to talk about the average mass of a group of atoms. Neutrons and Protons make up most of the mass of the atom, in fact, electrons are so light that they aren’t used in mass … In this scale 1 atomic mass unit (amu) corresponds to 1.660539040 × 10−24 Average Atomic Mass Formula. AM = f 1 M 1 + f 2 M 2 +… + f n M n. Where AM is the average atomic mass In chemistry, Isotopes are the atoms having similar atomic number (no of protons) but number of neutrons are generally different. Chemical elements listed by atomic mass The elements of the periodic table sorted by atomic mass. 1 u = 1.6605402E-24 g 1 g = 6.0221366516752E+23 u. What is the formula for atomic mass? This is because each proton and each neutron weigh one atomic mass unit (amu). The atomic mass of carbon is 12 while the atomic mass of oxygen is 16, therefore the formula mass of CO is: 12 + 16 = 28 Example – 15: 1 g of metallic bromide dissolved in water gave with the excess of silver nitrate, 1.88 g of silver bromide. ››More information on molar mass and molecular weight. Formula mass (M) is used for a substance made up of molecules. In chemistry, the formula weight is a quantity computed by multiplying the atomic weight (in atomic mass units) of each element in a chemical formula by the number of atoms of that element present in the formula, then adding all … It is expressed as a multiple of one-twelfth the mass of the carbon-12 atom, 1.992646547 × 10−23 gram, which is assigned an atomic mass of 12 units. How to Convert Atomic Mass Unit to Gram. Atomic mass, the quantity of matter contained in an atom of an element. Calculate the accurate atomic mass of the element, if its specific heat is 0.15 cal (atomic mass of Ag is 108 and that of bromine is 80). Step 1: List the known and unknown quantities and plan the problem. 
Use the atomic masses of each of the two isotopes of chlorine along with their percent abundances to calculate the average atomic mass of chlorine.\n\nExample: Calculate the formula mass of carbon(II) oxide, CO. Carbon(II) oxide is a compound made of two elements, carbon and oxygen; since the atomic mass of carbon is 12 and that of oxygen is 16, the formula mass of CO is 12 + 16 = 28.\n\nBe careful not to confuse atomic number and mass number: the atomic number is the number of protons, while the mass number is the sum of the protons and neutrons in the atom. Isotopes of an element have the same number of protons but generally different numbers of neutrons, which is why the tabulated atomic mass is the abundance-weighted average over the isotopes of that element." ]
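A small sketch of the weighted-average formula discussed above (average atomic mass = f₁M₁ + f₂M₂ + 
 + fₙMₙ), applied to the chlorine exercise. The chlorine-35 figures (34.969 amu, 75.77% abundance) are quoted on the page; the chlorine-37 values used below (36.966 amu, 24.23%) are the commonly tabulated ones and are supplied here as an assumption.

```python
# Average atomic mass as an abundance-weighted sum over isotopes.
def average_atomic_mass(isotopes):
    """isotopes: list of (mass_amu, fractional_abundance) pairs."""
    return sum(mass * fraction for mass, fraction in isotopes)

chlorine = [
    (34.969, 0.7577),  # Cl-35, values quoted in the text
    (36.966, 0.2423),  # Cl-37, assumed standard values
]

print(round(average_atomic_mass(chlorine), 2))  # ~35.45 amu, the periodic-table value
```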
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8501354,"math_prob":0.9935261,"size":9202,"snap":"2021-04-2021-17","text_gpt3_token_len":2331,"char_repetition_ratio":0.18275712,"word_repetition_ratio":0.25310466,"special_character_ratio":0.27591828,"punctuation_ratio":0.14125445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996449,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-22T13:53:51Z\",\"WARC-Record-ID\":\"<urn:uuid:79fcf216-e4c8-4a6b-b57f-a41521744d3d>\",\"Content-Length\":\"76109\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3dd82f55-c1dd-4c60-a20d-7078ae9d1e19>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec7a61ab-3763-4f57-8e2e-61b2edc0a652>\",\"WARC-IP-Address\":\"178.32.205.96\",\"WARC-Target-URI\":\"https://jubilatka.net/great-value-ypxt/fe9f86-atomic-mass-formula\",\"WARC-Payload-Digest\":\"sha1:TMVWZCEKEWWPINHDPURDCQPGUHI3V57W\",\"WARC-Block-Digest\":\"sha1:FWV4ZJEZRDDREZVMEG74GU3T355AWRRM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039610090.97_warc_CC-MAIN-20210422130245-20210422160245-00127.warc.gz\"}"}
https://www.powershow.com/view/24cae6-ODc3Z/Quintessence_from_time_evolution_of_fundamental_mass_scale_powerpoint_ppt_presentation
[ "# Quintessence from time evolution of fundamental mass scale - PowerPoint PPT Presentation\n\nPPT – Quintessence from time evolution of fundamental mass scale PowerPoint presentation | free to download - id: 24cae6-ODc3Z", null, "The Adobe Flash plugin is needed to view this content\n\nGet the plugin now\n\nView by Category\nTitle:\n\n## Quintessence from time evolution of fundamental mass scale\n\nDescription:\n\n### New long - range interaction. cosmon mass changes with time ! for ... ii) equation of state wh(today) -1. B) Time variation of fundamental 'constants' ... – PowerPoint PPT presentation\n\nNumber of Views:47\nAvg rating:3.0/5.0\nSlides: 77\nProvided by: cwett\nCategory:\nTags:\nTranscript and Presenter's Notes\n\nTitle: Quintessence from time evolution of fundamental mass scale\n\n1\nQuintessence from time evolution of fundamental\nmass scale\n2\n• Om X 1\n• Om 30\n• Oh 70\n• Dark Energy\n\n?\n3\nQuintessence\n• C.Wetterich\n\nA.Hebecker,M.Doran,M.Lilley,J.Schwindt, C.Müller,G\n.Schäfer,E.Thommes, R.Caldwell,M.Bartelmann\n4\nDark Energy dominates the Universe\n• Energy - density in the Universe\n• Matter Dark Energy\n• 30 70\n\n5\nMatter Everything that clumps\nAbell 2255 Cluster 300 Mpc\n6\nOm 0.3\ngravitational lens , HST\n7\nOtot1\n8\nDark Energy\n• Om X 1\n• Om 30\n• Oh 70 Dark Energy\n\nh homogenous , often O? instead of Oh\n9\nSpace between clumps is not empty Dark Energy\n!\n10\nDark Energy density isthe same at every point of\nspace homogeneous\n11\nTwo important predictions\n• Structure formation One primordial\n• fluctuation- spectrum\n• The expansion of the Universe\n• accelerates today !\n\n12\nconsistent cosmological model !\n13\nWhat is Dark Energy ? Cosmological Constant\nor Quintessence ?\n14\nCosmological Constant- Einstein -\n• Constant ? compatible with all symmetries\n• No time variation in contribution to energy\ndensity\n• Why so small ? ?/M4 10-120\n• Why important just today ?\n\n15\nCosm. Const. Quintessence\nstatic dynamical\n16\nQuintessence and solution of cosmological\nconstant problem should be related !\n17\nCosmological mass scales\n• Energy density\n• ? ( 2.410 -3 eV )- 4\n• Reduced Planck mass\n• M2.441018GeV\n• Newtons constant\n• GN(8pM²)\n\nOnly ratios of mass scales are observable !\nhomogeneous dark energy ?h/M4 6.5 10¹²¹\nmatter\n?m/M4 3.5 10¹²¹\n18\nTime evolution\nt² matter dominated universe t3/2\n• ?m/M4 a³\n• ?r/M4 a4 t -2 radiation dominated\nuniverse\n• Huge age small ratio\n• Same explanation for small dark energy ?\n\n19\nTime dependent Dark Energy Quintessence\n• What changes in time ?\n• Only dimensionless ratios of mass scales\n• are observable !\n• V potential energy of scalar field or\ncosmological constant\n• V/M4 is observable\n• Imagine the Planck mass M increases\n\n20\nQuintessence from time evolution of fundamental\nmass scale\n21\nFundamental mass scale\n• Fixed parameter or dynamical scale ?\n• Dynamical scale Field\n• Dynamical scale compared to what ?\n• momentum versus mass\n• ( or other parameter with dimension )\n\n22\nCosmon and fundamental mass scale\n• Assume all mass parameters are proportional to\nscalar field ? (GUTs, superstrings,)\n• Mp ? , mproton ? , ?QCD ? , MW ? ,\n• ? may evolve with time cosmon\n• mn/M ( almost ) constant - observation !\n• Only ratios of mass scales are observable\n\n23\nExample Field ? 
denotes scale of\ntransition from higher dimensional physics to\neffective four dimensional description in theory\nwithout fundamental mass parameter (except for\nrunning of dimensionless couplings)\n24\nDilatation symmetry\n• Lagrange density\n• Dilatation symmetry for\n• Conformal symmetry for d0\n\n25\nDilatation anomaly\n• Quantum fluctuations responsible for\n• dilatation anomaly\n• Running couplings hypothesis\n• Renormalization scale µ ( momentum scale )\n• ?(?/µ) A\n• E gt 0 crossover Quintessence\n\n26\nDilatation anomaly and quantum fluctuations\n• Computation of running couplings ( beta functions\n) needs unified theory !\n• Dominant contribution from modes with momenta ?\n!\n• No prejudice on natural value of anomalous\ndimension should be inferred from tiny\ncontributions at QCD- momentum scale !\n\n27\nCosmology\n• Cosmology ? increases with time !\n• ( due to coupling of ? to curvature scalar )\n• for large ? the ratio V/M4 decreases to zero\n• Effective cosmological constant vanishes\nasymptotically for large t !\n\n28\nAsymptotically vanishing effective cosmological\nconstant\n• Effective cosmological constant V/M4\n• ? (?/µ) A\n• V (?/µ) A ?4\n• M ?\n• V/M4 (?/µ) A\n\n29\nWeyl scaling\n• Weyl scaling gµ?? (M/?)2 gµ? ,\n• f/M ln (? 4/V(?))\n• Exponential potential V M4 exp(-f/M)\n• No additional constant !\n\n30\nWithout dilatation anomaly V const.\nMassless Goldstone boson dilaton Dilatation\nanomaly V (f ) Scalar with tiny time dependent\nmass cosmon\n31\nCrossover Quintessence\n\n• ( like QCD gauge coupling)\n• critical ? where d grows large\n• critical f where k grows large\nk²(f )d(?)/4\n• k²(f ) 1/(2E(fc f)/M)\n• if j c 276/M ( tuning ! )\n• This will be responsible for relative increase\nof dark energy in present cosmological epoch\n\n32\nRealistic cosmology\n• Hypothesis on running couplings\n• yields realistic cosmology\n• for suitable values of A , E , fc\n\n33\nQuintessence\n• Dynamical dark energy ,\n• generated by scalar field\n• (cosmon)\n\nC.Wetterich,Nucl.Phys.B302(1988)668,\n24.9.87 P.J.E.Peebles,B.Ratra,ApJ.Lett.325(1988)L1\n7, 20.10.87\n34\nPrediction homogeneous dark energyinfluences\nrecent cosmology- of same order as dark matter -\nOriginal models do not fit the present\nobservations . Modifications ( i.e. E gt 0 )\n35\nQuintessence\nCosmon Field f(x,y,z,t)\n• Homogeneous und isotropic Universe\nf(x,y,z,t)f(t)\n• Potential und kinetic energy of the cosmon -field\n• contribute to a dynamical energy density of the\nUniverse !\n\n36\nFundamental Interactions\nStrong, electromagnetic, weak interactions\nOn astronomical length scales graviton cosm\non\ngravitation\ncosmodynamics\n37\nDynamics of quintessence\n• Cosmon j scalar singlet field\n• Lagrange density L V ½ k(f) j j\n• (units reduced Planck mass M1)\n• Potential Vexp-j\n• Natural initial value in Planck era j0\n• today j276\n\n38\nQuintessence models\n• Kinetic function k(f) parameterizes the\n• details of the model - kinetial\n• k(f) kconst. 
Exponential\nQ.\n• k(f ) exp ((f f1)/a) Inverse power\nlaw Q.\n• k²(f ) 1/(2E(fc f)) Crossover Q.\n• possible naturalness criterion\n• k(f0)/ k(ftoday) not tiny or huge !\n• - else explanation needed -\n\n39\nCosmon\n• Scalar field changes its value even in the\npresent cosmological epoch\n• Potential und kinetic energy of cosmon contribute\nto the energy density of the Universe\n• Time - variable dark energy\n• ?h(t) decreases with time !\n\n40\nCosmon\n• Tiny mass\n• mc H\n• New long - range interaction\n\n41\ncosmon mass changes with time !\n• for standard kinetic term\n• mc2 V\n• for standard exponential potential , k\nconst.\n• mc2 V/ k2 V/( k2 M2 )\n• 3 Oh (1 - wh ) H2 /( 2 k2 )\n\n42\nRealistic model Crossover Quintessence\n\n• ( like QCD gauge coupling)\n• critical ? where d grows large\n• critical f where k grows large\nk²(f )d(?)/4\n• k²(f ) 1/(2E(fc f)/M)\n• if j c 276/M ( tuning ! )\n• Relative increase of dark energy in\npresent\n• cosmological epoch\n\n43\nQuintessence becomes important today\n44\nEquation of state\n• pT-V pressure\nkinetic energy\n• ?TV energy density\n• Equation of state\n• Depends on specific evolution of the scalar field\n\n45\nNegative pressure\n• w lt 0 Oh increases (with decreasing\nz )\n• w lt -1/3 expansion of the Universe is\n• accelerating\n• w -1 cosmological constant\n\nlate universe with small radiation component\n46\nsmall early and large presentdark energy\n• fraction in dark energy has substantially\nincreased since end of structure formation\n• expansion of universe accelerates in present\nepoch\n\n47\nQuintessence becomes important today\nNo reason why w should be constant in time !\n48\nHow can quintessence be distinguished from a\ncosmological constant ?\n49\nTime dependence of dark energy\ncosmological constant Oh t² (1z)-3\nM.Doran,\n50\nMeasure Oh(z) !\n51\nEarly dark energy\n• A few percent in the early Universe\n• Not possible for a cosmological constant\n\n52\nEarly quintessence slows down the growth of\nstructure\n53\nA few percent Early Dark Energy\n• If linear power spectrum fixed today ( s8 )\n• More Structure at high z !\n\nBartelmann,Doran,\n54\nHow to distinguish Q from ? 
?\n• A) Measurement Oh(z) H(z)\n• i) Oh(z) at the time of\n• structure formation , CMB - emission\n• or nucleosynthesis\n• ii) equation of state wh(today) gt -1\n• B) Time variation of fundamental constants\n• C) Apparent violation of equivalence principle\n\n55\nQuintessence and time variation of fundamental\nconstants\nStrong, electromagnetic, weak interactions\nGeneric prediction Strength unknown\nC.Wetterich , Nucl.Phys.B302,645(1988)\ngravitation\ncosmodynamics\n56\nTime varying constants\n• It is not difficult to obtain quintessence\npotentials from higher dimensional or string\ntheories\n• Exponential form rather generic\n• ( after Weyl scaling)\n• But most models show too strong time dependence\nof constants !\n\n57\nAre fundamental constantstime dependent ?\n• Fine structure constant a (electric charge)\n• Ratio nucleon mass to Planck mass\n\n58\nQuintessence and Time dependence of\nfundamental constants\n• Fine structure constant depends on value of\n• cosmon field a(f)\n• (similar in standard model couplings depend\non value of Higgs scalar field)\n• Time evolution of f\n• Time evolution of a\n\nJordan,\n59\nStandard Model of electroweak interactions\nHiggs - mechanism\n• The masses of all fermions and gauge bosons are\nproportional to the ( vacuum expectation ) value\nof a scalar field fH ( Higgs scalar )\n• For electron, quarks , W- and Z- bosons\n• melectron helectron fH\netc.\n\n60\nRestoration of symmetryat high temperature in\nthe early Universe\nhigh T less order more symmetry example magn\nets\nHigh T SYM ltfHgt0\nLow T SSB ltfHgtf0 ? 0\n61\nIn the hot plasma of the early Universe No\ndifference in mass for electron and myon !\n62\n(No Transcript)\n63\nQuintessence Couplings are still varying now\n!Strong bounds on the variation of couplings\n-interesting perspectives for observation !\n64\nAbundancies of primordial light elements from\nnucleosynthesis\nA.Coc\n65\nif present 2-sigma deviation of He\nabundance from CMB/nucleosynthesis prediction\nwould be confirmed\n?a/a ( z1010 ) -1.0 10-3 GUT 1 ?a/a (\nz1010 ) -2.7 10-4 GUT 2\nC.Mueller,G.Schaefer,\n66\nTime variation of coupling constants\nmust be tiny would be of very high\nsignificance ! Possible signal for\nQuintessence\n67\n?a?ta ?e?\nEverything is flowing\n68\nSummary\n• Oh 0.7\n• Q/? 
dynamical und static dark energy\n• will be distinguishable\n• Q time varying fundamental coupling\nconstants\n• violation of equivalence principle\n\n69\n????????????????????????\n• Why becomes Quintessence dominant in the present\ncosmological epoch ?\n• Are dark energy and dark matter related ?\n• Can Quintessence be explained in a fundamental\nunified theory ?\n\n70\nEnd\n71\nA few references C.Wetterich ,\n24.9.1987 P.J.E.Peebles,B.Ratra ,\n20.10.1987 B.Ratra,P.J.E.Peebles ,\n16.2.1988 J.Frieman,C.T.Hill,A.Stebbins,I.Waga ,\nPhys.Rev.Lett.75,2077(1995) P.Ferreira, M.Joyce\n, Phys.Rev.Lett.79,4740(1997) C.Wetterich ,\nAstron.Astrophys.301,321(1995) P.Viana, A.Liddle\n, Phys.Rev.D57,674(1998) E.Copeland,A.Liddle,D.Wa\nnds , Phys.Rev.D57,4686(1998) R.Caldwell,R.Dave,P\n.Steinhardt , Phys.Rev.Lett.80,1582(1998) P.Stein\nhardt,L.Wang,I.Zlatev , Phys.Rev.Lett.82,896(1999)\n72\nCosmodynamics\n• Cosmon mediates new long-range interaction\n• Range size of the Universe horizon\n• Strength weaker than gravity\n• photon electrodynamics\n• graviton gravity\n• cosmon cosmodynamics\n• Small correction to Newtons law\n\n73\nViolation of equivalence principle\n• Different couplings of cosmon to proton and\nneutron\n• Differential acceleration\n• Violation of equivalence principle\n\np,n\nearth\ncosmon\np,n\nonly apparent new fifth force !\n74\nDifferential acceleration ?\n• For unified theories ( GUT )\n\n??a/2a\nQ time dependence of other parameters\n75\nLink between time variation of a and\nviolation of equivalence principle\ntypically ? 10-14 if\ntime variation of a near Oklo upper bound\nto be tested by MICROSCOPE\n76\nVariation of fine structure constant as function\nof redshift\n• Three independent data sets from Keck/HIRES\n• ?a/a - 0.54 (12) 10-5\n• Murphy,Webb,Flammbaum, june\n2003\n• VLT\n• ?a/a - 0.06 (6) 10-5\n• Srianand,Chand,Petitje\nan,Aracil, feb.2004\n\nz 2" ]
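The slide transcript above is partially garbled (many Greek symbols were lost in extraction), but the equation-of-state slides read as the standard scalar-field relations p = T − V and ρ = T + V, with the Weyl-scaled exponential potential V = M⁎exp(−φ/M). The sketch below illustrates that reading in reduced Planck units (M = 1); the particular kinetic-to-potential ratios are chosen only for illustration, and the field value φ ≈ 276 is the "today" value quoted in the slides.

```python
# Scalar-field (cosmon) equation of state: p = T - V, rho = T + V, w = p / rho,
# with the Weyl-scaled exponential potential V = exp(-phi) in reduced Planck units (M = 1).
import math

def w(kinetic, potential):
    return (kinetic - potential) / (kinetic + potential)

phi_today = 276.0            # field value "today" quoted in the slides
V = math.exp(-phi_today)     # ~1e-120, the tiny dark-energy scale the talk emphasizes

for T_over_V in (10.0, 1.0, 0.1, 0.0):
    T = T_over_V * V
    print(f"T/V = {T_over_V:4.1f}  ->  w = {w(T, V):+.2f}")

# w approaches -1 (cosmological-constant-like) as the kinetic energy becomes negligible;
# w < -1/3 is the quoted condition for accelerated expansion.
```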
[ null, "https://www.powershow.com/themes/default/images/loading-slideshow.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61120874,"math_prob":0.8566277,"size":11500,"snap":"2019-13-2019-22","text_gpt3_token_len":3279,"char_repetition_ratio":0.1322199,"word_repetition_ratio":0.058089033,"special_character_ratio":0.2696522,"punctuation_ratio":0.15756404,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9533218,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-23T14:24:36Z\",\"WARC-Record-ID\":\"<urn:uuid:70a904e7-e743-4ae1-bf1d-d597ea23fa6f>\",\"Content-Length\":\"106643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d70b78c6-1b03-4607-a07f-98280b2fd6d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a3dc4fd-1030-49e1-8801-9fcd9843661b>\",\"WARC-IP-Address\":\"209.128.81.248\",\"WARC-Target-URI\":\"https://www.powershow.com/view/24cae6-ODc3Z/Quintessence_from_time_evolution_of_fundamental_mass_scale_powerpoint_ppt_presentation\",\"WARC-Payload-Digest\":\"sha1:CK3WQTMIAVPD34JMJNFC7TMWC5JYI3DR\",\"WARC-Block-Digest\":\"sha1:OUPLLTT2QNN5NUYBHN6TMEYPM7DAE2CU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202872.8_warc_CC-MAIN-20190323141433-20190323163433-00125.warc.gz\"}"}
https://math.stackexchange.com/questions/281961/radius-of-convergence-for-the-exponential-function
[ "# Radius of convergence for the exponential function\n\nI'm studying physics and am currently following a course on complex analysis and in the section on analytic functions, the radius of convergence $R$ for power series was introduced. The Taylor expansion around $z_0=0$ for the exponential function was considered as an example of a power series with $R\\rightarrow\\infty$. The notes state this can be proved by using Weierstrass' Criterion for uniform convergence, which I'll state in my own words:\n\nConsider a series\n\n$\\sum\\limits_{k=0}^{\\infty} f_k(z)$.\n\nIf you know numbers $a_k$ for which\n\n$|f_k(z)| < a_k$\n\nfor all $z$, and\n\n$\\sum\\limits_{k=0}^{\\infty} a_k$\n\nconverges uniformly, then also\n\n$\\sum\\limits_{k=0}^{\\infty} f_k(z)$\n\nconverges uniformly.\n\nFor the exponential, we have the power series\n\n$e^z = \\sum\\limits_{k=0}^{\\infty}\\dfrac{z^k}{k!}$.\n\nNow I've been thinking about this, but I can't seem to think of a uniformly converging series of $a_k$'s that bound the terms of this power series. Perhaps this is really straightforward and I wouldn't have any difficulties with it if I remembered my course on real analysis a bit better...\nIt's not a homework problem and series convergence is not a main goal in this course, but it's been bugging me that I don't understand why Weierstrass's Criterion proves that the radius of convergence goes to infinity for the exponential, so I thought I'd ask here. Thanks in advance.\n\n• The constants $a_k$ depend on $R$. You need to show that you can find the sequence $a_k$ for each given $R$. (I'm guessing you're trying to find one sequence of $a_k$ that works for all $R$. That's not possible.) Jan 19, 2013 at 13:16\n\nYou need to be careful in making the distinction between uniform convergence for any $z \\in \\mathbb{C}$ and uniform convergence for $|z| < R$ for a fixed $R$.\nThe exponential function is unbounded. If we were able to find a sequence $a_k$ so that for any $z \\in \\mathbb{C}$ $$\\left|e^z\\right| = \\left|\\sum_{k=0}^\\infty \\dfrac{z^k}{k!}\\right| \\le \\sum_{k=0}^\\infty \\left|\\dfrac{z^k}{k!}\\right| \\le \\sum_{k=0}^\\infty a_k < \\infty$$\nNow, if we limit the domain of the exponential function to $|z| < R$ for a fixed $R > 0$, then: $$\\sum_{k=0}^\\infty \\left|\\dfrac{z^k}{k!}\\right| \\le \\sum_{k=0}^\\infty \\dfrac{R^k}{k!} < \\infty$$\nThe convergence of $\\displaystyle \\sum_{k=0}^\\infty \\frac{R^k}{k!}$ can be established via the ratio test. Thus, the exponential function is uniformly convergent for any fixed $R > 0$ no matter how big it is.\n• Ahh, beautiful :) Thank you for this clear explanation, @Tunococ was right that I was looking for one sequence of $a_k$ for all $R$. Jan 19, 2013 at 13:35" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93769556,"math_prob":0.99755603,"size":1365,"snap":"2023-40-2023-50","text_gpt3_token_len":360,"char_repetition_ratio":0.11241734,"word_repetition_ratio":0.0,"special_character_ratio":0.24761905,"punctuation_ratio":0.08171206,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999964,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T16:37:47Z\",\"WARC-Record-ID\":\"<urn:uuid:9a548c5c-52bd-4df2-b25c-b5a5def1dc8f>\",\"Content-Length\":\"142382\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8838453f-6410-43a3-be4f-95e5be87b941>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e561bf4-3e64-4d38-8bbd-6f63ad5b2198>\",\"WARC-IP-Address\":\"104.18.11.86\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/281961/radius-of-convergence-for-the-exponential-function\",\"WARC-Payload-Digest\":\"sha1:SPXLT2FGUIS7BGT5ZGK3CTEAXSQVJPPS\",\"WARC-Block-Digest\":\"sha1:FEMN43EPQEYD5DO7RPWWD66RXMSRPQCO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510697.51_warc_CC-MAIN-20230930145921-20230930175921-00448.warc.gz\"}"}
https://www.nagwa.com/en/videos/907152737653/
[ "# Lesson Video: Applications of Newton’s Second Law: Inclined Pulley Mathematics\n\nIn this video, we will learn how to solve problems on the motion of two bodies connected by a string passing over a smooth pulley with one of them on an inclined plane.\n\n17:39\n\n### Video Transcript\n\nIn this video, we will learn how to solve problems on the motion of two bodies connected by a string passing over a smooth pulley, with one of them on an inclined plane. We will study problems where the plane is both smooth and rough. To begin this video, we will recall Newton’s second law of motion.\n\nNewton’s second law states that the net force on an object is equal to its mass multiplied by its acceleration. In this video, we will be dealing with scalar quantities and use the equation 𝐹 equals 𝑚𝑎, where 𝐹 is the sum of the net forces, 𝑚 is the mass, and 𝑎 the acceleration. We will measure acceleration in meters per square second. The mass will be measured in kilograms. Multiplying these gives us kilogram meters per square second. One kilogram meter per square second is equal to one newton. And this is the unit we’ll use to measure our forces.\n\nWe will now consider what the problem will look like when the plane is smooth. We will have two bodies 𝐴 and 𝐵 connected by a light inextensible string passing over a smooth pulley. This means that the tension in the string will be equal throughout. Both bodies will have a force going vertically downwards equal to its weight. This will be equal to the mass of the body multiplied by gravity. There will be a reaction force perpendicular to the plane. However, as the plane is smooth, there will be no frictional force on it.\n\nAs the string is inextensible, when the system is released, it will accelerate uniformly. We will then use Newton’s second law to resolve for body 𝐴 and body 𝐵. In the case of body 𝐵, we will resolve vertically. For body 𝐴, we will resolve parallel to the plane. In order to do this, we will need to know the angle of inclination, in this case 𝛼. Using our knowledge of right angle trigonometry, we can identify the component of the weight of body 𝐴 parallel and perpendicular to the plane. The force acting parallel to the plane is equal to 𝑀𝑔 multiplied by sin 𝛼, and the force perpendicular to the plane is equal to 𝑀𝑔 multiplied by cos 𝛼.\n\nIf body 𝐵 is accelerating downwards and we consider this to be the positive direction, the sum of the forces are equal to 𝑀𝑔 minus 𝑇. This is equal to the mass of body 𝐵 multiplied by the acceleration. Body 𝐴 is moving up the plane. If this is the positive direction, the sum of the forces acting parallel to the plane are equal to 𝑇 minus 𝑚𝑔 sin 𝛼. This is equal to the mass of body 𝐴 multiplied by the acceleration.\n\nThis will give us two simultaneous equations that we can use to calculate any unknowns. We will now look at a couple of examples.\n\nA body of mass five kilograms rests on a smooth plane inclined at an angle of 35 degrees to the horizontal. It is connected by a light inextensible string passing over a smooth pulley fixed at the top of the plane to another body of mass 19 kilograms hanging freely vertically below the pulley. Given that the acceleration due to gravity 𝑔 is equal to 9.8 meters per square second, determine the acceleration of the system.\n\nWe will begin by sketching the system. We are told that the smooth plane is inclined at an angle of 35 degrees. The masses of the two bodies are five kilograms and 19 kilograms. 
This means that they will have a vertical downward force equal to five 𝑔 and 19𝑔, respectively, where the gravity 𝑔 is equal to 9.8 meters per square second.\n\nThe pulley is smooth, and the string is light and inextensible. This means that the tension in the string will be equal throughout. When the system is released, the magnitude of acceleration will also be constant. As the plane is smooth, there will be no frictional force. In order to solve this problem, we will use Newton’s second law, which states that the sum of the net forces is equal to the mass multiplied by the acceleration. For the body on the plane, we will resolve parallel to the plane. And for the body hanging freely, we will resolve vertically.\n\nThe five 𝑔 force is neither parallel nor perpendicular to the plane. Therefore, we need to calculate these two components. Using our knowledge of right angle trigonometry, we see that the perpendicular component is equal to five 𝑔 multiplied by cos of 35 degrees and the component parallel to the plane is equal to five 𝑔 multiplied by sin of 35 degrees.\n\nBody 𝐴 is moving up the plane. If we assume this is the positive direction, the sum of its forces is equal to 𝑇 minus five 𝑔 multiplied by sin 35. This is equal to five 𝑎 as the mass of the object is five kilograms. As object 𝐵 is accelerating downwards, the sum of its forces is equal to 19𝑔 minus 𝑇. This is equal to 19𝑎. We now have two simultaneous equations that we can solve to determine the acceleration of the system. By adding equation one and equation two, we eliminate the tension 𝑇.\n\nThe left-hand side becomes 19𝑔 minus five 𝑔 multiplied by sin of 35 degrees. And the right-hand side becomes 24𝑎. We can then divide both sides of this equation by 24. Typing this into the calculator gives us a value of 𝑎 equal to 6.5872 and so on. Rounding this to two decimal places gives us an answer of 6.59 meters per square second. We could substitute this value back into equation one or equation two to calculate the tension 𝑇. However, this is not required in this question.\n\nWe will now consider what happens when the surface is rough. Recall in the diagram we saw earlier for a smooth plane we used Newton’s second law to create two equations. We resolved vertically for the freely hanging object and resolved parallel to the plane for the object on the inclined plane. Let’s now consider what happens when the plane is rough.\n\nWhen body 𝐴 is accelerating up the plane, there will be a frictional force acting downwards parallel to the plane. This means that we have a tension force acting in the positive direction and two forces acting in the negative direction. The sum of the net forces parallel to the plane is equal to 𝑇 minus 𝑚𝑔 multiplied by sin 𝛼 minus the frictional force 𝐹 𝑟. This is equal to the mass multiplied by the acceleration.\n\nWe know that when dealing with a frictional force, it is equal to 𝜇, the coefficient of friction, multiplied by the normal reaction force 𝑅, when 𝜇 is a constant that lies between zero and one inclusive. If we resolve perpendicular to the plane, we see that the sum of the net forces is equal to 𝑅 minus 𝑚𝑔 multiplied by cos 𝛼. However, the body is not moving in this direction. Therefore, the acceleration is equal to zero. 𝑅 minus 𝑚𝑔 multiplied by cos 𝛼 is equal to zero.\n\nThis can be rewritten as 𝑅 is equal to 𝑚𝑔 multiplied by cos 𝛼. 
When solving any problem of this type on a rough plane, we can use our three equations to calculate any unknowns, together with the formula that links the frictional force and the normal reaction force. We will now look at an example of this type.\n\nA body 𝐴 of mass 240 grams rests on a rough plane inclined to the horizontal at an angle whose sine is three-fifths. It is connected by a light inextensible string passing over a smooth pulley fixed to the top of the plane to another body 𝐵 of mass 300 grams. If the system was released from rest and body 𝐵 descended 196 centimeters in three seconds, find the coefficient of friction between the body and the plane. Take 𝑔 equal to 9.8 meters per square second.\n\nWe begin by sketching the system. We are told that the sin of angle 𝛼 is equal to three-fifths. Using our knowledge of right angle trigonometry and the Pythagorean triple three, four, five, we know that cosine of 𝛼, or cos of 𝛼, is equal to four-fifths. The mass of the two bodies is given in grams. We know that there are 1000 grams in one kilogram. This means that 240 grams is equal to 0.24 kilograms. We divide our value in grams by 1000.\n\nBody 𝐴 will, therefore, have a force acting vertically downwards equal to 0.24𝑔, where 𝑔 is equal to 9.8 meters per square second. Body 𝐵 has a mass of 300 grams, and this is equal to 0.3 kilograms. Therefore, this body has a downward force of 0.3 multiplied by 𝑔.\n\nWe have a light inextensible string passing over a smooth pulley. This means that the tension throughout the string will be equal. It also means that when released, the system will travel with uniform acceleration. Body 𝐴 has a normal reaction force perpendicular to the plane. As the plane itself is rough, there will be a frictional force acting down the plane.\n\nWe will now use Newton’s second law, which states that the sum of the net forces is equal to the mass multiplied by the acceleration, to resolve parallel and perpendicular to the plane for body 𝐴 and vertically for body 𝐵. The weight of body 𝐴 is acting vertically downwards. Therefore, we need to find the components of this that are parallel and perpendicular to the plane. Once again, using our knowledge of right angle trigonometry gives us a force of 0.24𝑔 multiplied by cos 𝛼 perpendicular to the plane and 0.24𝑔 multiplied by sin 𝛼 parallel to the plane.\n\nThere are three forces acting on 𝐴 parallel to the plane: the tension force, the frictional force, and this weight component. This gives us the equation 𝑇 minus 0.24𝑔 multiplied by sin 𝛼 minus the frictional force 𝐹 𝑟 is equal to 0.24𝑎. Perpendicular to the plane, the sum of the net forces is equal to 𝑅 minus 0.24𝑔 multiplied by cos 𝛼. The body is not moving in this direction. Therefore, this is equal to zero.\n\nRearranging the equation, we see that the normal reaction force 𝑅 is equal to 0.24𝑔 multiplied by cos 𝛼. Finally, resolving vertically for body 𝐵, where downwards is the positive direction, gives us 0.3𝑔 minus 𝑇 is equal to 0.3𝑎. We can substitute in our values for sin 𝛼 and cos 𝛼. This means that the normal reaction force is equal to 1176 over 625. 0.24𝑔 multiplied by sin 𝛼 is equal to 882 over 625.\n\nOur next step is to calculate the acceleration of the system, given the fact that body 𝐵 descended 196 centimeters in three seconds. In order to do this, we will use the equations of motion, also known as the SUVAT equations. The displacement of the body was 196 centimeters. This is equal to 1.96 meters, as there are 100 centimeters in a meter. 
The body was released from rest, so the initial velocity is zero meters per second. We are trying to calculate the acceleration, and we are told the time is three seconds.\n\nThis means that we can use the equation 𝑠 is equal to 𝑢𝑡 plus a half 𝑎𝑡 squared. Substituting in our values gives us 1.96 is equal to zero multiplied by three plus a half multiplied by 𝑎 multiplied by three squared. The right-hand side simplifies to 4.5𝑎. Dividing both sides by 4.5 gives us 𝑎 is equal to 98 over 225. The acceleration of the system is 98 over 225 meters per square second. We can now substitute this value into our equations.\n\nWe now have two unknowns left, the tension 𝑇 and the frictional force 𝐹 𝑟. We know that the normal reaction force is 1176 over 625 newtons. By adding 𝑇 and subtracting 49 over 375 to both sides of the bottom equation, we can calculate the tension force 𝑇. This gives us a value of 𝑇 equal to 2107 over 750 newtons. We can substitute this into the top equation. Rearranging this equation gives us a frictional force 𝐹 𝑟 equal to 1617 over 1250 newtons.\n\nWe now need to calculate the coefficient of friction. And we know that the frictional force is equal to this coefficient of friction multiplied by the normal reaction force. This means that the coefficient of friction 𝜇 is equal to 𝐹 𝑟 divided by 𝑅. Typing this into the calculator gives us a coefficient of friction 𝜇 equal to eleven sixteenths.\n\nWe will now summarize the key points from this video. To solve problems involving pulleys on an inclined plane, we used Newton’s second law, 𝐹 equals 𝑚𝑎. We resolve forces vertically as well as parallel and perpendicular to the plane. If the plane is rough, we have a frictional force 𝐹 𝑟 acting against the motion, where this frictional force is equal to the coefficient of friction 𝜇 multiplied by the normal reaction force 𝑅. We can also use the equations of motion, or SUVAT equations, to calculate unknowns and help us solve problems of this type." ]
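A short sketch reproducing the arithmetic of the second worked example in the transcript (the rough plane with sin α = 3/5): the acceleration from s = ut + (1/2)at², then the normal reaction, the tension, the friction force, and finally the coefficient of friction. All of the input numbers come from the transcript; the variable names and the exact-fraction arithmetic are choices made here for illustration.

```python
# Rough inclined plane with a pulley: recover mu = 11/16 from the transcript's data.
from fractions import Fraction as F

g     = F(98, 10)        # 9.8 m/s^2
m_A   = F(24, 100)       # 0.24 kg resting on the plane
m_B   = F(3, 10)         # 0.30 kg hanging freely
sin_a = F(3, 5)
cos_a = F(4, 5)
s, t  = F(196, 100), 3   # body B falls 1.96 m from rest in 3 s

a  = 2 * s / t**2                      # s = 0*t + (1/2)*a*t^2        -> 98/225 m/s^2
R  = m_A * g * cos_a                   # equilibrium perpendicular    -> 1176/625 N
T  = m_B * g - m_B * a                 # Newton's 2nd law for body B  -> 2107/750 N
Fr = T - m_A * g * sin_a - m_A * a     # Newton's 2nd law for body A along the plane
mu = Fr / R                            # friction coefficient

print(a, R, T, Fr, mu)                 # mu = 11/16, matching the transcript
```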
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9025919,"math_prob":0.9978151,"size":12033,"snap":"2021-31-2021-39","text_gpt3_token_len":3173,"char_repetition_ratio":0.17723835,"word_repetition_ratio":0.17253838,"special_character_ratio":0.2201446,"punctuation_ratio":0.09832134,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998766,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T11:02:59Z\",\"WARC-Record-ID\":\"<urn:uuid:2ceda9a7-8af7-462c-9671-547189f715c5>\",\"Content-Length\":\"62718\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4334c0b-a3a6-48f0-9cd5-8d10cf0f58d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3b50125-2576-429a-b76d-f8e44d411e5f>\",\"WARC-IP-Address\":\"76.223.114.27\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/videos/907152737653/\",\"WARC-Payload-Digest\":\"sha1:KPTADODSQMTR3YIFET5FGAKMGRCPJH5N\",\"WARC-Block-Digest\":\"sha1:PVTBMSWMLVCIPC5OOVJNHLZVNKNQ37TX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153709.26_warc_CC-MAIN-20210728092200-20210728122200-00322.warc.gz\"}"}
http://msweather.org/October2016.htm
[ "Daily report for October 2016\n```Averages\\Extremes for day :01\n------------------------------------------------------------\n\nAverage temperature = 57.9°F\nAverage humidity = 96%\nAverage dewpoint = 56.6°F\nAverage barometer = 30.2 in.\nAverage windspeed = 3.0 mph\nAverage gustspeed = 5.8 mph\nAverage direction = 23° (NNE)\nRainfall for month = 0.07 in.\nRainfall for year = 24.99 in.\nRainfall for day = 0.07 in.\nMaximum rain per minute = 0.01 in. on day 01 at time 03:29\nMaximum temperature = 59.9°F on day 01 at time 19:19\nMinimum temperature = 56.5°F on day 01 at time 06:10\nMaximum humidity = 98% on day 01 at time 11:22\nMinimum humidity = 91% on day 01 at time 00:00\nMaximum dewpoint = 58.0°F on day 01 at time 17:22\nMinimum dewpoint = 55.6°F on day 01 at time 023:59\nMaximum pressure = 30.251 in. on day 01 at time 00:01\nMinimum pressure = 30.137 in. on day 01 at time 19:01\nMaximum windspeed = 6.9 mph on day 01 at time 20:59\nMaximum gust speed = 15.0 mph from 090 °( E ) on day 01 at time 06:27\nDaily wind run = 070.8miles\nMaximum heat index = 59.9°F on day 01 at time 19:19\n\nAverages\\Extremes for day :02\n------------------------------------------------------------\n\nAverage temperature = 59.4°F\nAverage humidity = 94%\nAverage dewpoint = 57.5°F\nAverage barometer = 30.1 in.\nAverage windspeed = 0.7 mph\nAverage gustspeed = 1.9 mph\nAverage direction = 69° (ENE)\nRainfall for month = 0.07 in.\nRainfall for year = 24.99 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 02 at time 23:59\nMaximum temperature = 62.1°F on day 02 at time 16:56\nMinimum temperature = 56.6°F on day 02 at time 07:30\nMaximum humidity = 97% on day 02 at time 08:27\nMinimum humidity = 89% on day 02 at time 16:59\nMaximum dewpoint = 59.7°F on day 02 at time 18:06\nMinimum dewpoint = 55.6°F on day 02 at time 6:19\nMaximum pressure = 30.146 in. on day 02 at time 00:36\nMinimum pressure = 30.029 in. on day 02 at time 23:21\nMaximum windspeed = 4.6 mph on day 02 at time 12:44\nMaximum gust speed = 9.2 mph from 068 °(ENE) on day 02 at time 01:06\nDaily wind run = 017.6miles\nMaximum heat index = 62.1°F on day 02 at time 16:56\n\nAverages\\Extremes for day :03\n------------------------------------------------------------\n\nAverage temperature = 62.0°F\nAverage humidity = 90%\nAverage dewpoint = 58.9°F\nAverage barometer = 30.0 in.\nAverage windspeed = 0.4 mph\nAverage gustspeed = 0.9 mph\nAverage direction = 311° ( NW)\nRainfall for month = 0.07 in.\nRainfall for year = 24.99 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 03 at time 23:59\nMaximum temperature = 71.3°F on day 03 at time 15:19\nMinimum temperature = 57.0°F on day 03 at time 23:23\nMaximum humidity = 98% on day 03 at time 08:56\nMinimum humidity = 61% on day 03 at time 15:11\nMaximum dewpoint = 61.7°F on day 03 at time 16:48\nMinimum dewpoint = 55.9°F on day 03 at time 23:13\nMaximum pressure = 30.140 in. on day 03 at time 23:59\nMinimum pressure = 29.985 in. 
on day 03 at time 05:23\nMaximum windspeed = 4.6 mph on day 03 at time 15:28\nMaximum gust speed = 5.8 mph from 315 °( NW) on day 03 at time 15:28\nDaily wind run = 008.5miles\nMaximum heat index = 75.8°F on day 03 at time 15:11\n\nAverages\\Extremes for day :04\n------------------------------------------------------------\n\nAverage temperature = 61.7°F\nAverage humidity = 81%\nAverage dewpoint = 55.8°F\nAverage barometer = 30.3 in.\nAverage windspeed = 2.0 mph\nAverage gustspeed = 4.4 mph\nAverage direction = 66° (ENE)\nRainfall for month = 0.08 in.\nRainfall for year = 25.00 in.\nRainfall for day = 0.01 in.\nMaximum rain per minute = 0.01 in. on day 04 at time 18:57\nMaximum temperature = 67.3°F on day 04 at time 15:34\nMinimum temperature = 57.0°F on day 04 at time 23:59\nMaximum humidity = 98% on day 04 at time 00:52\nMinimum humidity = 62% on day 04 at time 23:53\nMaximum dewpoint = 59.0°F on day 04 at time 00:52\nMinimum dewpoint = 44.1°F on day 04 at time 23:53\nMaximum pressure = 30.346 in. on day 04 at time 22:26\nMinimum pressure = 30.139 in. on day 04 at time 00:12\nMaximum windspeed = 6.9 mph on day 04 at time 22:26\nMaximum gust speed = 13.8 mph from 090 °( E ) on day 04 at time 22:26\nDaily wind run = 048.0miles\nMaximum heat index = 67.3°F on day 04 at time 15:34\n\nAverages\\Extremes for day :05\n------------------------------------------------------------\n\nAverage temperature = 57.4°F\nAverage humidity = 80%\nAverage dewpoint = 50.9°F\nAverage barometer = 30.3 in.\nAverage windspeed = 1.6 mph\nAverage gustspeed = 3.3 mph\nAverage direction = 73° (ENE)\nRainfall for month = 0.08 in.\nRainfall for year = 25.00 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 05 at time 23:59\nMaximum temperature = 65.9°F on day 05 at time 17:06\nMinimum temperature = 51.5°F on day 05 at time 23:01\nMaximum humidity = 97% on day 05 at time 23:59\nMinimum humidity = 58% on day 05 at time 15:24\nMaximum dewpoint = 54.2°F on day 05 at time 18:10\nMinimum dewpoint = 44.4°F on day 05 at time 0:03\nMaximum pressure = 30.372 in. on day 05 at time 10:20\nMinimum pressure = 30.273 in. on day 05 at time 17:04\nMaximum windspeed = 5.8 mph on day 05 at time 15:24\nMaximum gust speed = 13.8 mph from 113 °(ESE) on day 05 at time 08:48\nDaily wind run = 037.7miles\nMaximum heat index = 65.9°F on day 05 at time 17:06\n\nAverages\\Extremes for day :06\n------------------------------------------------------------\n\nAverage temperature = 58.6°F\nAverage humidity = 84%\nAverage dewpoint = 52.8°F\nAverage barometer = 30.3 in.\nAverage windspeed = 0.5 mph\nAverage gustspeed = 1.2 mph\nAverage direction = 103° (ESE)\nRainfall for month = 0.08 in.\nRainfall for year = 25.00 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 06 at time 23:59\nMaximum temperature = 73.2°F on day 06 at time 15:54\nMinimum temperature = 50.0°F on day 06 at time 03:44\nMaximum humidity = 98% on day 06 at time 09:06\nMinimum humidity = 40% on day 06 at time 14:56\nMaximum dewpoint = 58.4°F on day 06 at time 11:38\nMinimum dewpoint = 43.8°F on day 06 at time 14:56\nMaximum pressure = 30.339 in. on day 06 at time 09:56\nMinimum pressure = 30.274 in. 
on day 06 at time 17:07\nMaximum windspeed = 4.6 mph on day 06 at time 13:21\nMaximum gust speed = 6.9 mph from 045 °( NE) on day 06 at time 13:15\nDaily wind run = 011.7miles\nMaximum heat index = 77.4°F on day 06 at time 14:18\n\nAverages\\Extremes for day :07\n------------------------------------------------------------\n\nAverage temperature = 58.5°F\nAverage humidity = 86%\nAverage dewpoint = 53.7°F\nAverage barometer = 30.3 in.\nAverage windspeed = 0.7 mph\nAverage gustspeed = 1.3 mph\nAverage direction = 169° ( S )\nRainfall for month = 0.08 in.\nRainfall for year = 25.00 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 07 at time 23:59\nMaximum temperature = 72.9°F on day 07 at time 16:25\nMinimum temperature = 48.4°F on day 07 at time 07:25\nMaximum humidity = 99% on day 07 at time 09:32\nMinimum humidity = 50% on day 07 at time 16:49\nMaximum dewpoint = 61.4°F on day 07 at time 12:03\nMinimum dewpoint = 47.6°F on day 07 at time 7:25\nMaximum pressure = 30.325 in. on day 07 at time 08:56\nMinimum pressure = 30.202 in. on day 07 at time 23:58\nMaximum windspeed = 5.8 mph on day 07 at time 16:37\nMaximum gust speed = 10.4 mph from 158 °(SSE) on day 07 at time 17:14\nDaily wind run = 016.0miles\nMaximum heat index = 76.9°F on day 07 at time 16:49\n\nAverages\\Extremes for day :08\n------------------------------------------------------------\n\nAverage temperature = 61.0°F\nAverage humidity = 95%\nAverage dewpoint = 59.4°F\nAverage barometer = 30.1 in.\nAverage windspeed = 0.5 mph\nAverage gustspeed = 1.2 mph\nAverage direction = 170° ( S )\nRainfall for month = 0.31 in.\nRainfall for year = 25.23 in.\nRainfall for day = 0.23 in.\nMaximum rain per minute = 0.01 in. on day 08 at time 23:34\nMaximum temperature = 69.5°F on day 08 at time 12:09\nMinimum temperature = 50.6°F on day 08 at time 03:28\nMaximum humidity = 99% on day 08 at time 09:30\nMinimum humidity = 83% on day 08 at time 12:10\nMaximum dewpoint = 65.9°F on day 08 at time 16:14\nMinimum dewpoint = 49.8°F on day 08 at time 3:28\nMaximum pressure = 30.201 in. on day 08 at time 00:00\nMinimum pressure = 29.979 in. on day 08 at time 23:51\nMaximum windspeed = 5.8 mph on day 08 at time 13:06\nMaximum gust speed = 12.7 mph from 158 °(SSE) on day 08 at time 13:06\nDaily wind run = 011.7miles\nMaximum heat index = 69.5°F on day 08 at time 12:09\n\nAverages\\Extremes for day :09\n------------------------------------------------------------\n\nAverage temperature = 57.0°F\nAverage humidity = 89%\nAverage dewpoint = 53.7°F\nAverage barometer = 30.0 in.\nAverage windspeed = 5.0 mph\nAverage gustspeed = 9.9 mph\nAverage direction = 3° ( N )\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.99 in.\nMaximum rain per minute = 0.01 in. on day 09 at time 15:39\nMaximum temperature = 66.1°F on day 09 at time 00:48\nMinimum temperature = 52.4°F on day 09 at time 23:59\nMaximum humidity = 98% on day 09 at time 03:54\nMinimum humidity = 66% on day 09 at time 22:56\nMaximum dewpoint = 65.5°F on day 09 at time 00:48\nMinimum dewpoint = 42.2°F on day 09 at time 23:50\nMaximum pressure = 30.069 in. on day 09 at time 23:47\nMinimum pressure = 29.931 in. 
on day 09 at time 03:35\nMaximum windspeed = 11.5 mph on day 09 at time 23:16\nMaximum gust speed = 28.8 mph from 338 °(NNW) on day 09 at time 17:47\nDaily wind run = 120.4miles\nMaximum heat index = 66.1°F on day 09 at time 00:48\n\nAverages\\Extremes for day :10\n------------------------------------------------------------\n\nAverage temperature = 53.2°F\nAverage humidity = 58%\nAverage dewpoint = 38.4°F\nAverage barometer = 30.2 in.\nAverage windspeed = 5.1 mph\nAverage gustspeed = 10.0 mph\nAverage direction = 354° ( N )\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 10 at time 23:59\nMaximum temperature = 58.4°F on day 10 at time 15:25\nMinimum temperature = 48.5°F on day 10 at time 08:06\nMaximum humidity = 73% on day 10 at time 01:23\nMinimum humidity = 43% on day 10 at time 16:24\nMaximum dewpoint = 43.3°F on day 10 at time 00:10\nMinimum dewpoint = 34.0°F on day 10 at time 20:36\nMaximum pressure = 30.374 in. on day 10 at time 23:45\nMinimum pressure = 30.061 in. on day 10 at time 00:15\nMaximum windspeed = 13.8 mph on day 09 at time 00:34\nMaximum gust speed = 26.5 mph from 023 °(NNE) on day 10 at time 00:47\nDaily wind run = 121.7miles\nMaximum heat index = 58.4°F on day 10 at time 15:25\n\nAverages\\Extremes for day :11\n------------------------------------------------------------\n\nAverage temperature = 51.9°F\nAverage humidity = 75%\nAverage dewpoint = 44.0°F\nAverage barometer = 30.4 in.\nAverage windspeed = 1.3 mph\nAverage gustspeed = 2.5 mph\nAverage direction = 6° ( N )\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 11 at time 23:59\nMaximum temperature = 59.9°F on day 11 at time 16:48\nMinimum temperature = 43.6°F on day 11 at time 07:47\nMaximum humidity = 96% on day 11 at time 23:59\nMinimum humidity = 58% on day 11 at time 00:46\nMaximum dewpoint = 50.1°F on day 11 at time 17:55\nMinimum dewpoint = 36.7°F on day 11 at time 03:57\nMaximum pressure = 30.484 in. on day 11 at time 11:07\nMinimum pressure = 30.368 in. on day 11 at time 00:00\nMaximum windspeed = 6.9 mph on day 11 at time 12:24\nMaximum gust speed = 12.7 mph from 00 °( N ) on day 11 at time 03:56\nDaily wind run = 031.9miles\nMaximum heat index = 59.9°F on day 11 at time 16:48\n\nAverages\\Extremes for day :12\n------------------------------------------------------------\n\nAverage temperature = 53.7°F\nAverage humidity = 86%\nAverage dewpoint = 49.1°F\nAverage barometer = 30.3 in.\nAverage windspeed = 0.3 mph\nAverage gustspeed = 0.9 mph\nAverage direction = 185° ( S )\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 12 at time 23:59\nMaximum temperature = 64.1°F on day 12 at time 13:50\nMinimum temperature = 43.8°F on day 12 at time 03:09\nMaximum humidity = 98% on day 12 at time 08:40\nMinimum humidity = 64% on day 12 at time 16:59\nMaximum dewpoint = 54.0°F on day 12 at time 14:48\nMinimum dewpoint = 42.8°F on day 12 at time 3:09\nMaximum pressure = 30.424 in. on day 12 at time 00:05\nMinimum pressure = 30.209 in. 
on day 12 at time 23:59\nMaximum windspeed = 4.6 mph on day 12 at time 15:53\nMaximum gust speed = 6.9 mph from 180 °( S ) on day 12 at time 16:40\nDaily wind run = 008.3miles\nMaximum heat index = 64.1°F on day 12 at time 13:50\n\nAverages\\Extremes for day :13\n------------------------------------------------------------\n\nAverage temperature = 58.4°F\nAverage humidity = 87%\nAverage dewpoint = 54.3°F\nAverage barometer = 30.1 in.\nAverage windspeed = 1.4 mph\nAverage gustspeed = 2.9 mph\nAverage direction = 278° ( W )\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 13 at time 23:59\nMaximum temperature = 67.4°F on day 13 at time 15:14\nMinimum temperature = 50.2°F on day 13 at time 07:29\nMaximum humidity = 99% on day 13 at time 09:48\nMinimum humidity = 65% on day 13 at time 23:59\nMaximum dewpoint = 60.4°F on day 13 at time 10:01\nMinimum dewpoint = 43.9°F on day 13 at time 23:59\nMaximum pressure = 30.207 in. on day 13 at time 00:00\nMinimum pressure = 30.013 in. on day 13 at time 15:43\nMaximum windspeed = 9.2 mph on day 13 at time 22:46\nMaximum gust speed = 15.0 mph from 360 °( N ) on day 13 at time 23:35\nDaily wind run = 033.1miles\nMaximum heat index = 67.4°F on day 13 at time 15:14\n\nAverages\\Extremes for day :14\n------------------------------------------------------------\n\nAverage temperature = 51.2°F\nAverage humidity = 70%\nAverage dewpoint = 41.3°F\nAverage barometer = 30.3 in.\nAverage windspeed = 2.5 mph\nAverage gustspeed = 4.9 mph\nAverage direction = 359° ( N )\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 14 at time 23:59\nMaximum temperature = 60.7°F on day 14 at time 16:44\nMinimum temperature = 41.8°F on day 14 at time 23:58\nMaximum humidity = 94% on day 14 at time 23:59\nMinimum humidity = 44% on day 14 at time 15:29\nMaximum dewpoint = 45.9°F on day 14 at time 18:02\nMinimum dewpoint = 36.8°F on day 14 at time 13:59\nMaximum pressure = 30.394 in. on day 14 at time 23:38\nMinimum pressure = 30.166 in. on day 14 at time 00:05\nMaximum windspeed = 10.4 mph on day 13 at time 00:23\nMaximum gust speed = 19.6 mph from 00 °( N ) on day 14 at time 00:23\nDaily wind run = 058.9miles\nMaximum heat index = 60.7°F on day 14 at time 16:44\n\nAverages\\Extremes for day :15\n------------------------------------------------------------\n\nAverage temperature = 50.6°F\nAverage humidity = 82%\nAverage dewpoint = 45.0°F\nAverage barometer = 30.4 in.\nAverage windspeed = 0.9 mph\nAverage gustspeed = 1.8 mph\nAverage direction = 36° ( NE)\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 15 at time 23:59\nMaximum temperature = 62.3°F on day 15 at time 15:24\nMinimum temperature = 40.2°F on day 15 at time 04:01\nMaximum humidity = 98% on day 15 at time 06:27\nMinimum humidity = 60% on day 15 at time 16:52\nMaximum dewpoint = 51.7°F on day 15 at time 16:09\nMinimum dewpoint = 38.9°F on day 15 at time 4:01\nMaximum pressure = 30.462 in. on day 15 at time 09:47\nMinimum pressure = 30.324 in. 
on day 15 at time 23:59\nMaximum windspeed = 5.8 mph on day 15 at time 16:36\nMaximum gust speed = 11.5 mph from 158 °(SSE) on day 15 at time 12:42\nDaily wind run = 021.2miles\nMaximum heat index = 62.3°F on day 15 at time 15:24\n\nAverages\\Extremes for day :16\n------------------------------------------------------------\n\nAverage temperature = 57.8°F\nAverage humidity = 86%\nAverage dewpoint = 53.2°F\nAverage barometer = 30.2 in.\nAverage windspeed = 1.4 mph\nAverage gustspeed = 3.1 mph\nAverage direction = 212° (SSW)\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 16 at time 23:59\nMaximum temperature = 68.6°F on day 16 at time 14:51\nMinimum temperature = 45.6°F on day 16 at time 00:13\nMaximum humidity = 97% on day 16 at time 08:29\nMinimum humidity = 64% on day 16 at time 14:44\nMaximum dewpoint = 59.5°F on day 16 at time 22:31\nMinimum dewpoint = 43.8°F on day 16 at time 0:09\nMaximum pressure = 30.323 in. on day 16 at time 00:00\nMinimum pressure = 30.002 in. on day 16 at time 21:05\nMaximum windspeed = 8.1 mph on day 16 at time 11:29\nMaximum gust speed = 19.6 mph from 270 °( W ) on day 16 at time 11:48\nDaily wind run = 032.8miles\nMaximum heat index = 75.3°F on day 16 at time 14:08\n\nAverages\\Extremes for day :17\n------------------------------------------------------------\n\nAverage temperature = 67.0°F\nAverage humidity = 83%\nAverage dewpoint = 61.4°F\nAverage barometer = 29.9 in.\nAverage windspeed = 0.7 mph\nAverage gustspeed = 1.9 mph\nAverage direction = 218° ( SW)\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 17 at time 23:59\nMaximum temperature = 79.1°F on day 17 at time 14:42\nMinimum temperature = 59.8°F on day 17 at time 07:37\nMaximum humidity = 95% on day 17 at time 23:59\nMinimum humidity = 62% on day 17 at time 14:37\nMaximum dewpoint = 66.3°F on day 17 at time 14:58\nMinimum dewpoint = 57.8°F on day 17 at time 7:37\nMaximum pressure = 30.008 in. on day 17 at time 00:00\nMinimum pressure = 29.860 in. on day 17 at time 17:15\nMaximum windspeed = 3.5 mph on day 17 at time 19:32\nMaximum gust speed = 11.5 mph from 203 °(SSW) on day 17 at time 15:43\nDaily wind run = 016.1miles\nMaximum heat index = 81.0°F on day 17 at time 14:42\n\nAverages\\Extremes for day :18\n------------------------------------------------------------\n\nAverage temperature = 68.7°F\nAverage humidity = 84%\nAverage dewpoint = 63.2°F\nAverage barometer = 29.9 in.\nAverage windspeed = 2.4 mph\nAverage gustspeed = 5.3 mph\nAverage direction = 198° (SSW)\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 18 at time 23:59\nMaximum temperature = 78.1°F on day 18 at time 12:59\nMinimum temperature = 62.3°F on day 18 at time 07:39\nMaximum humidity = 96% on day 18 at time 08:02\nMinimum humidity = 62% on day 18 at time 15:35\nMaximum dewpoint = 67.3°F on day 18 at time 11:29\nMinimum dewpoint = 60.9°F on day 18 at time 7:16\nMaximum pressure = 29.957 in. on day 18 at time 09:08\nMinimum pressure = 29.842 in. 
on day 18 at time 18:31\nMaximum windspeed = 9.2 mph on day 18 at time 16:33\nMaximum gust speed = 21.9 mph from 158 °(SSE) on day 18 at time 19:48\nDaily wind run = 057.4miles\nMaximum heat index = 80.0°F on day 18 at time 12:59\n\nAverages\\Extremes for day :19\n------------------------------------------------------------\n\nAverage temperature = 70.7°F\nAverage humidity = 76%\nAverage dewpoint = 62.2°F\nAverage barometer = 30.0 in.\nAverage windspeed = 1.5 mph\nAverage gustspeed = 3.5 mph\nAverage direction = 263° ( W )\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 19 at time 23:59\nMaximum temperature = 82.0°F on day 19 at time 16:01\nMinimum temperature = 63.6°F on day 19 at time 23:59\nMaximum humidity = 94% on day 19 at time 07:54\nMinimum humidity = 45% on day 19 at time 18:57\nMaximum dewpoint = 67.1°F on day 19 at time 13:54\nMinimum dewpoint = 50.9°F on day 19 at time 18:47\nMaximum pressure = 30.146 in. on day 19 at time 23:50\nMinimum pressure = 29.862 in. on day 19 at time 05:00\nMaximum windspeed = 5.8 mph on day 19 at time 11:28\nMaximum gust speed = 13.8 mph from 270 °( W ) on day 19 at time 11:17\nDaily wind run = 036.2miles\nMaximum heat index = 83.9°F on day 19 at time 15:56\n\nAverages\\Extremes for day :20\n------------------------------------------------------------\n\nAverage temperature = 63.7°F\nAverage humidity = 82%\nAverage dewpoint = 57.9°F\nAverage barometer = 30.1 in.\nAverage windspeed = 1.5 mph\nAverage gustspeed = 3.3 mph\nAverage direction = 97° ( E )\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 20 at time 23:59\nMaximum temperature = 73.3°F on day 20 at time 13:44\nMinimum temperature = 58.6°F on day 20 at time 03:49\nMaximum humidity = 97% on day 20 at time 23:59\nMinimum humidity = 59% on day 20 at time 13:15\nMaximum dewpoint = 61.8°F on day 20 at time 14:09\nMinimum dewpoint = 53.7°F on day 20 at time 1:17\nMaximum pressure = 30.178 in. on day 20 at time 09:25\nMinimum pressure = 29.988 in. on day 20 at time 23:59\nMaximum windspeed = 6.9 mph on day 20 at time 14:06\nMaximum gust speed = 13.8 mph from 180 °( S ) on day 20 at time 08:51\nDaily wind run = 034.8miles\nMaximum heat index = 76.5°F on day 20 at time 13:15\n\nAverages\\Extremes for day :21\n------------------------------------------------------------\n\nAverage temperature = 68.1°F\nAverage humidity = 92%\nAverage dewpoint = 65.7°F\nAverage barometer = 29.7 in.\nAverage windspeed = 2.1 mph\nAverage gustspeed = 4.3 mph\nAverage direction = 169° ( S )\nRainfall for month = 1.30 in.\nRainfall for year = 26.22 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 21 at time 23:59\nMaximum temperature = 76.0°F on day 21 at time 11:14\nMinimum temperature = 62.6°F on day 21 at time 00:02\nMaximum humidity = 98% on day 21 at time 07:10\nMinimum humidity = 78% on day 21 at time 11:14\nMaximum dewpoint = 69.6°F on day 21 at time 12:50\nMinimum dewpoint = 61.8°F on day 21 at time 0:02\nMaximum pressure = 29.988 in. on day 21 at time 00:01\nMinimum pressure = 29.396 in. 
on day 21 at time 23:59\nMaximum windspeed = 12.7 mph on day 21 at time 10:17\nMaximum gust speed = 21.9 mph from 158 °(SSE) on day 21 at time 11:19\nDaily wind run = 049.9miles\nMaximum heat index = 77.1°F on day 21 at time 11:14\n\nAverages\\Extremes for day :22\n------------------------------------------------------------\n\nAverage temperature = 52.3°F\nAverage humidity = 87%\nAverage dewpoint = 48.4°F\nAverage barometer = 29.4 in.\nAverage windspeed = 4.9 mph\nAverage gustspeed = 9.7 mph\nAverage direction = 284° (WNW)\nRainfall for month = 1.76 in.\nRainfall for year = 26.68 in.\nRainfall for day = 0.46 in.\nMaximum rain per minute = 0.01 in. on day 22 at time 15:07\nMaximum temperature = 64.9°F on day 22 at time 01:49\nMinimum temperature = 45.1°F on day 22 at time 15:13\nMaximum humidity = 98% on day 22 at time 03:35\nMinimum humidity = 65% on day 22 at time 23:59\nMaximum dewpoint = 64.3°F on day 22 at time 01:49\nMinimum dewpoint = 37.9°F on day 22 at time 23:56\nMaximum pressure = 29.629 in. on day 22 at time 23:59\nMinimum pressure = 29.309 in. on day 22 at time 10:05\nMaximum windspeed = 16.1 mph on day 22 at time 19:34\nMaximum gust speed = 33.4 mph from 270 °( W ) on day 22 at time 21:04\nDaily wind run = 116.9miles\nMaximum heat index = 64.9°F on day 22 at time 01:49\n\nAverages\\Extremes for day :23\n------------------------------------------------------------\n\nAverage temperature = 52.4°F\nAverage humidity = 57%\nAverage dewpoint = 37.3°F\nAverage barometer = 29.8 in.\nAverage windspeed = 5.1 mph\nAverage gustspeed = 10.1 mph\nAverage direction = 267° ( W )\nRainfall for month = 1.76 in.\nRainfall for year = 26.68 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 23 at time 23:59\nMaximum temperature = 59.7°F on day 23 at time 16:38\nMinimum temperature = 46.9°F on day 23 at time 07:31\nMaximum humidity = 68% on day 23 at time 23:59\nMinimum humidity = 46% on day 23 at time 13:59\nMaximum dewpoint = 41.4°F on day 23 at time 14:38\nMinimum dewpoint = 33.0°F on day 23 at time 5:19\nMaximum pressure = 29.885 in. on day 23 at time 19:48\nMinimum pressure = 29.627 in. on day 23 at time 00:06\nMaximum windspeed = 12.7 mph on day 23 at time 12:11\nMaximum gust speed = 29.9 mph from 248 °(WSW) on day 23 at time 14:55\nDaily wind run = 123.4miles\nMaximum heat index = 59.7°F on day 23 at time 16:38\n\nAverages\\Extremes for day :24\n------------------------------------------------------------\n\nAverage temperature = 53.7°F\nAverage humidity = 70%\nAverage dewpoint = 43.4°F\nAverage barometer = 29.9 in.\nAverage windspeed = 3.5 mph\nAverage gustspeed = 7.1 mph\nAverage direction = 284° (WNW)\nRainfall for month = 1.91 in.\nRainfall for year = 26.83 in.\nRainfall for day = 0.15 in.\nMaximum rain per minute = 0.02 in. on day 24 at time 03:46\nMaximum temperature = 60.1°F on day 24 at time 14:15\nMinimum temperature = 47.6°F on day 24 at time 23:58\nMaximum humidity = 96% on day 24 at time 07:05\nMinimum humidity = 48% on day 24 at time 16:26\nMaximum dewpoint = 50.9°F on day 24 at time 03:42\nMinimum dewpoint = 34.3°F on day 24 at time 21:44\nMaximum pressure = 30.115 in. on day 24 at time 23:52\nMinimum pressure = 29.778 in. 
on day 24 at time 03:22\nMaximum windspeed = 11.5 mph on day 24 at time 18:30\nMaximum gust speed = 24.2 mph from 315 °( NW) on day 24 at time 18:29\nDaily wind run = 083.9miles\nMaximum heat index = 60.1°F on day 24 at time 14:15\n\nAverages\\Extremes for day :25\n------------------------------------------------------------\n\nAverage temperature = 47.4°F\nAverage humidity = 54%\nAverage dewpoint = 31.5°F\nAverage barometer = 30.2 in.\nAverage windspeed = 3.9 mph\nAverage gustspeed = 7.7 mph\nAverage direction = 305° ( NW)\nRainfall for month = 1.91 in.\nRainfall for year = 26.83 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 25 at time 23:59\nMaximum temperature = 51.3°F on day 25 at time 13:24\nMinimum temperature = 43.3°F on day 25 at time 23:43\nMaximum humidity = 67% on day 25 at time 04:13\nMinimum humidity = 42% on day 25 at time 16:37\nMaximum dewpoint = 36.5°F on day 25 at time 00:41\nMinimum dewpoint = 25.4°F on day 25 at time 22:31\nMaximum pressure = 30.338 in. on day 25 at time 23:52\nMinimum pressure = 30.109 in. on day 25 at time 01:05\nMaximum windspeed = 9.2 mph on day 25 at time 22:29\nMaximum gust speed = 26.5 mph from 00 °( N ) on day 25 at time 13:08\nDaily wind run = 093.3miles\nMaximum heat index = 51.3°F on day 25 at time 13:24\n\nAverages\\Extremes for day :26\n------------------------------------------------------------\n\nAverage temperature = 44.0°F\nAverage humidity = 50%\nAverage dewpoint = 26.4°F\nAverage barometer = 30.4 in.\nAverage windspeed = 3.3 mph\nAverage gustspeed = 6.7 mph\nAverage direction = 341° (NNW)\nRainfall for month = 1.91 in.\nRainfall for year = 26.83 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 26 at time 23:59\nMaximum temperature = 48.8°F on day 26 at time 15:30\nMinimum temperature = 40.6°F on day 26 at time 07:38\nMaximum humidity = 60% on day 26 at time 07:38\nMinimum humidity = 37% on day 26 at time 15:00\nMaximum dewpoint = 29.7°F on day 26 at time 12:11\nMinimum dewpoint = 23.5°F on day 26 at time 15:00\nMaximum pressure = 30.467 in. on day 26 at time 23:34\nMinimum pressure = 30.335 in. on day 26 at time 00:20\nMaximum windspeed = 9.2 mph on day 26 at time 13:29\nMaximum gust speed = 18.4 mph from 338 °(NNW) on day 26 at time 09:37\nDaily wind run = 078.5miles\nMaximum heat index = 48.8°F on day 26 at time 15:30\n\nAverages\\Extremes for day :27\n------------------------------------------------------------\n\nAverage temperature = 44.1°F\nAverage humidity = 81%\nAverage dewpoint = 38.3°F\nAverage barometer = 30.3 in.\nAverage windspeed = 1.3 mph\nAverage gustspeed = 3.1 mph\nAverage direction = 116° (ESE)\nRainfall for month = 2.76 in.\nRainfall for year = 27.68 in.\nRainfall for day = 0.85 in.\nMaximum rain per minute = 0.03 in. on day 27 at time 21:57\nMaximum temperature = 59.6°F on day 27 at time 23:48\nMinimum temperature = 39.7°F on day 27 at time 13:06\nMaximum humidity = 98% on day 27 at time 22:57\nMinimum humidity = 54% on day 27 at time 00:25\nMaximum dewpoint = 58.6°F on day 27 at time 23:40\nMinimum dewpoint = 26.1°F on day 27 at time 00:25\nMaximum pressure = 30.478 in. on day 27 at time 09:32\nMinimum pressure = 29.981 in. 
on day 27 at time 23:35\nMaximum windspeed = 9.2 mph on day 27 at time 22:28\nMaximum gust speed = 19.6 mph from 225 °( SW) on day 27 at time 22:32\nDaily wind run = 032.1miles\nMaximum heat index = 59.6°F on day 27 at time 23:48\n\nAverages\\Extremes for day :28\n------------------------------------------------------------\n\nAverage temperature = 47.3°F\nAverage humidity = 78%\nAverage dewpoint = 40.5°F\nAverage barometer = 30.1 in.\nAverage windspeed = 4.1 mph\nAverage gustspeed = 8.3 mph\nAverage direction = 300° (WNW)\nRainfall for month = 2.82 in.\nRainfall for year = 27.74 in.\nRainfall for day = 0.06 in.\nMaximum rain per minute = 0.01 in. on day 28 at time 03:35\nMaximum temperature = 58.7°F on day 28 at time 00:00\nMinimum temperature = 43.4°F on day 28 at time 23:46\nMaximum humidity = 97% on day 28 at time 04:03\nMinimum humidity = 58% on day 28 at time 17:31\nMaximum dewpoint = 57.4°F on day 28 at time 00:02\nMinimum dewpoint = 33.8°F on day 28 at time 17:30\nMaximum pressure = 30.200 in. on day 28 at time 22:31\nMinimum pressure = 29.971 in. on day 28 at time 01:47\nMaximum windspeed = 11.5 mph on day 28 at time 17:29\nMaximum gust speed = 24.2 mph from 360 °( N ) on day 28 at time 12:13\nDaily wind run = 097.9miles\nMaximum heat index = 58.7°F on day 28 at time 00:00\n\nAverages\\Extremes for day :29\n------------------------------------------------------------\n\nAverage temperature = 50.6°F\nAverage humidity = 74%\nAverage dewpoint = 42.4°F\nAverage barometer = 30.1 in.\nAverage windspeed = 1.9 mph\nAverage gustspeed = 4.0 mph\nAverage direction = 244° (WSW)\nRainfall for month = 2.82 in.\nRainfall for year = 27.74 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 29 at time 23:59\nMaximum temperature = 63.0°F on day 29 at time 23:57\nMinimum temperature = 36.7°F on day 29 at time 07:17\nMaximum humidity = 96% on day 29 at time 08:21\nMinimum humidity = 54% on day 29 at time 14:48\nMaximum dewpoint = 52.6°F on day 29 at time 23:52\nMinimum dewpoint = 34.9°F on day 29 at time 7:16\nMaximum pressure = 30.203 in. on day 29 at time 04:42\nMinimum pressure = 29.890 in. on day 29 at time 19:13\nMaximum windspeed = 8.1 mph on day 29 at time 13:48\nMaximum gust speed = 17.3 mph from 203 °(SSW) on day 29 at time 14:14\nDaily wind run = 044.8miles\nMaximum heat index = 63.0°F on day 29 at time 23:57\n\nAverages\\Extremes for day :30\n------------------------------------------------------------\n\nAverage temperature = 63.1°F\nAverage humidity = 77%\nAverage dewpoint = 55.6°F\nAverage barometer = 29.9 in.\nAverage windspeed = 2.1 mph\nAverage gustspeed = 4.2 mph\nAverage direction = 297° (WNW)\nRainfall for month = 3.48 in.\nRainfall for year = 28.40 in.\nRainfall for day = 0.66 in.\nMaximum rain per minute = 0.06 in. on day 30 at time 17:26\nMaximum temperature = 74.4°F on day 30 at time 15:16\nMinimum temperature = 53.9°F on day 30 at time 23:52\nMaximum humidity = 96% on day 30 at time 20:47\nMinimum humidity = 55% on day 30 at time 14:18\nMaximum dewpoint = 60.5°F on day 30 at time 15:33\nMinimum dewpoint = 49.8°F on day 30 at time 23:54\nMaximum pressure = 29.925 in. on day 30 at time 23:59\nMinimum pressure = 29.768 in. 
on day 30 at time 15:43\nMaximum windspeed = 8.1 mph on day 30 at time 16:34\nMaximum gust speed = 17.3 mph from 023 °(NNE) on day 30 at time 15:37\nDaily wind run = 050.2miles\nMaximum heat index = 77.2°F on day 30 at time 15:17\n\nAverages\\Extremes for day :31\n------------------------------------------------------------\n\nAverage temperature = 47.7°F\nAverage humidity = 69%\nAverage dewpoint = 37.8°F\nAverage barometer = 30.1 in.\nAverage windspeed = 3.1 mph\nAverage gustspeed = 6.3 mph\nAverage direction = 349° ( N )\nRainfall for month = 3.48 in.\nRainfall for year = 28.40 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 31 at time 23:59\nMaximum temperature = 53.9°F on day 31 at time 00:01\nMinimum temperature = 42.6°F on day 31 at time 23:16\nMaximum humidity = 87% on day 31 at time 00:30\nMinimum humidity = 52% on day 31 at time 16:30\nMaximum dewpoint = 50.1°F on day 31 at time 00:01\nMinimum dewpoint = 34.5°F on day 31 at time 19:19\nMaximum pressure = 30.305 in. on day 31 at time 23:36\nMinimum pressure = 29.924 in. on day 31 at time 00:00\nMaximum windspeed = 10.4 mph on day 31 at time 07:46\nMaximum gust speed = 19.6 mph from 00 °( N ) on day 31 at time 07:46\nDaily wind run = 074.2miles\nMaximum heat index = 53.9°F on day 31 at time 00:01\n\n---------------------------------------------------------------------------------------------\nAverages\\Extremes for the month of October 2016\n\n---------------------------------------------------------------------------------------------\nAverage temperature = 56.5°F\nAverage humidity = 79%\nAverage dewpoint = 49.6°F\nAverage barometer = 30.106 in.\nAverage windspeed = 2.2 mph\nAverage gustspeed = 4.6 mph\nAverage direction = 317° ( NW)\nRainfall for month = 3.488 in.\nRainfall for year = 28.409 in.\nMaximum rain per minute = 0.060 in on day 30 at time 17:26\nMaximum temperature = 82.0°F on day 19 at time 16:01\nMinimum temperature = 36.7°F on day 29 at time 07:17\nMaximum humidity = 99% on day 13 at time 09:48\nMinimum humidity = 37% on day 26 at time 15:00\nMaximum dewpoint = 69.6°F on day 21 at time 12:50\nMinimum dewpoint = 23.5°F on day 26 at time 15:00\nMaximum pressure = 30.48 in. on day 11 at time 11:07\nMinimum pressure = 29.31 in. on day 22 at time 10:05\nMaximum windspeed = 16.1 mph from 270°( W ) on day 22 at time 19:34\nMaximum gust speed = 33.4 mph from 293°(WNW) on day 22 at time 21:04\nMaximum heat index = 83.9°F on day 19 at time 15:56\nAvg daily max temp :65.6°F\nAvg daily min temp :49.7°F\nTotal windrun = 1639.6miles\n-----------------------------------\nDaily rain totals\n-----------------------------------\n00.07 in. on day 1\n00.01 in. on day 4\n00.23 in. on day 8\n00.99 in. on day 9\n00.46 in. on day 22\n00.15 in. on day 24\n00.85 in. on day 27\n00.06 in. on day 28\n00.66 in. on day 30\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76978236,"math_prob":0.9991998,"size":35349,"snap":"2021-43-2021-49","text_gpt3_token_len":12265,"char_repetition_ratio":0.36568114,"word_repetition_ratio":0.3788017,"special_character_ratio":0.46278536,"punctuation_ratio":0.1624044,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9762829,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T18:33:05Z\",\"WARC-Record-ID\":\"<urn:uuid:07ed8498-7ec7-4a36-b490-e9d8ea4da37f>\",\"Content-Length\":\"50562\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1510e9e9-3c36-44ba-a956-bb7394757384>\",\"WARC-Concurrent-To\":\"<urn:uuid:afd9b824-88af-48c8-8fd7-833753aee5b6>\",\"WARC-IP-Address\":\"173.236.156.225\",\"WARC-Target-URI\":\"http://msweather.org/October2016.htm\",\"WARC-Payload-Digest\":\"sha1:HZK2AP3ALZLYOTDYWTDWDKQGMIYZP3BE\",\"WARC-Block-Digest\":\"sha1:OI3LD53VYZGMTC4B3EIOETJRXGB5VLIN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363309.86_warc_CC-MAIN-20211206163944-20211206193944-00442.warc.gz\"}"}
https://discuss.pytorch.org/t/how-to-reacreate-a-single-image-with-trained-model/147628
[ "# How to reacreate a single image with trained model?\n\nHello!\n\nI have trained a CCGAN model and saved the generator and discriminator using\n\n``````torch.save(gen.state_dict(), \"GENERATOR/gen.pt\")\ntorch.save(disc.state_dict(), \"DISCRIMINATOR/disc.pt\")\n``````\n\nI now wish to test this model on a single image. (I have trained several models using several slightly different custom datasets, and I wish to see which dataset is the most fitting). How do I go about presenting the algorithm a single image in order to test the output? Is there a tutorial on this? Any tips are welcome.\n\nTank you!\n\nFrom what I understand you want to pass a single image through a model.\n\nModels usually take inputs for the `forward` method in the form of `BxCxHxW` (Batch x Channel x Height x Width).\n\nSo if you have a single image with the form `CxHxW` you need to add the `B` dimension through `unsqueeze(0)`.\n\n``````output = model(img.unsqueeze(0))\n``````\n\nHope this helps", null, "1 Like\n\nI guess I’m supposed to run the image just through the generator part, but I’m unsure how to properly load the generator model and the saved states.\n\nI’ll give it a try! Thanks for your help", null, "To save and load you can just use this.", null, "Taken from here\n\nHmm, using\n\n``````import torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport tifffile as tiff\nfrom generator import Generator\n\nmodel = Generator(features_g = 64, num_channels = 3)\n\noutput = model(img.unsqueeze(0))\n``````\n\nI get\n\n``````RuntimeError: Error(s) in loading state_dict for Generator:\nUnexpected key(s) in state_dict: \"down1.model.0.weight\",\n\"down2.model.0.weight\", \"down2.model.1.weight\", \"down2.model.1.bias\", \"down2.model.1.running_mean\", \"down2.model.1.running_var\",...\n``````\n\nAny idea?\n\nCan you print the state dict that your model actually has and the one that is being loaded to see what is expected?\n\n``````for param in torch.load('gen.pt'):\nprint(param)\n``````\n``````for param in model.state_dict():\nprint(param)\n``````\n\nAlso, you should have the image inside a tensor before passing it to the model.\n\n1 Like\n\nWill keep in mind about putting the image in tensor.\n\n``````for param in torch.load('gen.pt'):\nprint(param)\n``````\n\nprints out\n\n``````down1.model.0.weight\ndown2.model.0.weight\ndown2.model.1.weight\ndown2.model.1.bias\ndown2.model.1.running_mean\ndown2.model.1.running_var\ndown2.model.1.num_batches_tracked\ndown3.model.0.weight\ndown3.model.1.weight\ndown3.model.1.bias\ndown3.model.1.running_mean\ndown3.model.1.running_var\ndown3.model.1.num_batches_tracked\ndown4.model.0.weight\ndown4.model.1.weight\ndown4.model.1.bias\ndown4.model.1.running_mean\ndown4.model.1.running_var\ndown4.model.1.num_batches_tracked\ndown5.model.0.weight\ndown5.model.1.weight\ndown5.model.1.bias\n...\n``````\n\nsame keys as in the error.\n\nCould you also post the output for the model.state_dict?\n\nHuh, there’s nothing in the print. I don’t understand…\n\nThe reason it was empty is because I didn’t include in the model `self.build()` in `__init__`.\n\nDo the parameters now match, or are they still different?\n\nEverything seems in order now. Hopefully the generator works properly. Thanks", null, "1 Like" ]
[ null, "https://discuss.pytorch.org/images/emoji/apple/slight_smile.png", null, "https://discuss.pytorch.org/images/emoji/apple/smiley.png", null, "https://discuss.pytorch.org/uploads/default/original/3X/c/1/c144ba8a33ae52dde84c55fedecba9fecf8d3949.png", null, "https://discuss.pytorch.org/images/emoji/apple/smiley.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8003423,"math_prob":0.792997,"size":2974,"snap":"2022-40-2023-06","text_gpt3_token_len":763,"char_repetition_ratio":0.16565657,"word_repetition_ratio":0.005063291,"special_character_ratio":0.2471419,"punctuation_ratio":0.23054332,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95995337,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T17:33:29Z\",\"WARC-Record-ID\":\"<urn:uuid:4c156d13-a4cb-49bb-bcae-49b399f41bc7>\",\"Content-Length\":\"41424\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac7ec041-b9a2-4359-bdb6-4cc572ab495b>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f06a2f1-b8b0-4d4e-9090-07f7d9aa3d7a>\",\"WARC-IP-Address\":\"159.203.145.104\",\"WARC-Target-URI\":\"https://discuss.pytorch.org/t/how-to-reacreate-a-single-image-with-trained-model/147628\",\"WARC-Payload-Digest\":\"sha1:IUDFTTOMSS777RR3UXUUJGVXPKFKB4UU\",\"WARC-Block-Digest\":\"sha1:6PEFRIUMIGSG2DE24KXIDIZWJPMYT2O2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499826.71_warc_CC-MAIN-20230130165437-20230130195437-00649.warc.gz\"}"}
https://www.onlinemathlearning.com/pythagorean-theorem-worksheets.html
[ "# Pythagorean Theorem Worksheet and Solutions\n\nRelated Topics & Worksheets:\nPythagorean Theorem\n\nObjective: I know how to use the Pythagorean theorem to find the length of a missing side of a right triangle.\n\nIf we take the length of the hypotenuse to be c and the length of the legs to be a and b then the Pythagorean theorem tells us that:\n\nc2 = a2 + b2", null, "Read the lesson on Pythagorean theorem for more information and examples.\n\nFill in all the gaps, then press \"Check\" to check your answers. Use the \"Hint\" button to get a free letter if an answer is giving you trouble. You can also click on the \"[?]\" button to get a clue. Note that you will lose points if you ask for hints or clues!\nFor each of the following, find the length of the unknown side. (Refer to the above triangle)\nRound to the nearest hundredths when necessary.\n\na = 9, b = 12, c =\n\na = 15, b = 8, c =\n\nb = 12, c = 13, a =\n\na = 7, c = 25, b =\n\na = 40, c = 41, b =\n\na = 21, b = 20, c =\n\na = 35, c = 37, b =\n\na = 11, c = 61, b =\n\na = 28, c = 53, b =\n\na = 33, b = 56, c =\n\na = 6.7, b = 5.5, c =\n\nc = 9.4, a = 4.6, b =\n\nTry the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations.", null, "We hope that the free math worksheets have been helpful. We encourage parents and teachers to select the topics according to the needs of the child. For more difficult questions, the child may be encouraged to work out the problem on a piece of paper before entering the solution. We hope that the kids will also love the fun stuff and puzzles." ]
[ null, "https://www.onlinemathlearning.com/objects/default_image.gif", null, "https://www.onlinemathlearning.com/objects/default_image.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8729015,"math_prob":0.99760944,"size":951,"snap":"2022-40-2023-06","text_gpt3_token_len":200,"char_repetition_ratio":0.11087645,"word_repetition_ratio":0.0,"special_character_ratio":0.19768664,"punctuation_ratio":0.07777778,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991389,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T12:04:19Z\",\"WARC-Record-ID\":\"<urn:uuid:e7781e86-afe3-4f9b-969c-85822f023243>\",\"Content-Length\":\"54519\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c70a097d-4308-4c6e-9f54-a43f40d6ae8b>\",\"WARC-Concurrent-To\":\"<urn:uuid:907bbef7-25e7-4099-9bab-e47751e1af32>\",\"WARC-IP-Address\":\"173.247.219.45\",\"WARC-Target-URI\":\"https://www.onlinemathlearning.com/pythagorean-theorem-worksheets.html\",\"WARC-Payload-Digest\":\"sha1:LLLZGSLXOSJRUQ3U73AFMAEIF57YHMUM\",\"WARC-Block-Digest\":\"sha1:5VKCAEBA3IFMHSBOPPZHLD23I7JVJ2PZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337803.86_warc_CC-MAIN-20221006092601-20221006122601-00134.warc.gz\"}"}
https://es.mathworks.com/help/driving/ref/ctmeasjac.html
[ "# ctmeasjac\n\nJacobian of measurement function for constant turn-rate motion\n\n## Syntax\n\n``measurementjac = ctmeasjac(state)``\n``measurementjac = ctmeasjac(state,frame)``\n``measurementjac = ctmeasjac(state,frame,sensorpos)``\n``measurementjac = ctmeasjac(state,frame,sensorpos,sensorvel)``\n``measurementjac = ctmeasjac(state,frame,sensorpos,sensorvel,laxes)``\n``measurementjac = ctmeasjac(state,measurementParameters)``\n\n## Description\n\nexample\n\n````measurementjac = ctmeasjac(state)` returns the measurement Jacobian, `measurementjac`, for a constant turn-rate Kalman filter motion model in rectangular coordinates. `state` specifies the current state of the track.```\n\nexample\n\n````measurementjac = ctmeasjac(state,frame)` also specifies the measurement coordinate system, `frame`.```\n\nexample\n\n````measurementjac = ctmeasjac(state,frame,sensorpos)` also specifies the sensor position, `sensorpos`.```\n````measurementjac = ctmeasjac(state,frame,sensorpos,sensorvel)` also specifies the sensor velocity, `sensorvel`.```\n````measurementjac = ctmeasjac(state,frame,sensorpos,sensorvel,laxes)` also specifies the local sensor axes orientation, `laxes`.```\n\nexample\n\n````measurementjac = ctmeasjac(state,measurementParameters)` specifies the measurement parameters, `measurementParameters`.```\n\n## Examples\n\ncollapse all\n\nDefine the state of an object in 2-D constant turn-rate motion. The state is the position and velocity in each dimension, and the turn rate. Construct the measurement Jacobian in rectangular coordinates.\n\n```state = [1;10;2;20;5]; jacobian = ctmeasjac(state)```\n```jacobian = 3×5 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 ```\n\nDefine the state of an object in 2-D constant turn-rate motion. The state is the position and velocity in each dimension, and the turn rate. Compute the measurement Jacobian with respect to spherical coordinates.\n\n```state = [1;10;2;20;5]; measurementjac = ctmeasjac(state,'spherical')```\n```measurementjac = 4×5 -22.9183 0 11.4592 0 0 0 0 0 0 0 0.4472 0 0.8944 0 0 0.0000 0.4472 0.0000 0.8944 0 ```\n\nDefine the state of an object in 2-D constant turn-rate motion. The state is the position and velocity in each dimension, and the turn rate. Compute the measurement Jacobian with respect to spherical coordinates centered at `[5;-20;0]`.\n\n```state = [1;10;2;20;5]; sensorpos = [5;-20;0]; measurementjac = ctmeasjac(state,'spherical',sensorpos)```\n```measurementjac = 4×5 -2.5210 0 -0.4584 0 0 0 0 0 0 0 -0.1789 0 0.9839 0 0 0.5903 -0.1789 0.1073 0.9839 0 ```\n\nDefine the state of an object in 2-D constant turn-rate motion. The state is the position and velocity in each dimension, and the turn rate. Compute the measurement Jacobian with respect to spherical coordinates centered at `[25;-40;0]`.\n\n```state2d = [1;10;2;20;5]; sensorpos = [25,-40,0].'; frame = 'spherical'; sensorvel = [0;5;0]; laxes = eye(3); measurementjac = ctmeasjac(state2d,frame,sensorpos,sensorvel,laxes)```\n```measurementjac = 4×5 -1.0284 0 -0.5876 0 0 0 0 0 0 0 -0.4961 0 0.8682 0 0 0.2894 -0.4961 0.1654 0.8682 0 ```\n\nPut the measurement parameters in a structure and use the alternative syntax.\n\n```measparm = struct('Frame',frame,'OriginPosition',sensorpos,'OriginVelocity',sensorvel, ... 
'Orientation',laxes); measurementjac = ctmeasjac(state2d,measparm)```\n```measurementjac = 4×5 -1.0284 0 -0.5876 0 0 0 0 0 0 0 -0.4961 0 0.8682 0 0 0.2894 -0.4961 0.1654 0.8682 0 ```\n\n## Input Arguments\n\ncollapse all\n\nState vector for a constant turn-rate motion model in two or three spatial dimensions, specified as a real-valued vector or matrix.\n\n• When specified as a 5-element vector, the state vector describes 2-D motion in the x-y plane. You can specify the state vector as a row or column vector. The components of the state vector are `[x;vx;y;vy;omega]` where `x` represents the x-coordinate and `vx` represents the velocity in the x-direction. `y` represents the y-coordinate and `vy` represents the velocity in the y-direction. `omega` represents the turn rate.\n\nWhen specified as a 5-by-N matrix, each column represents a different state vector N represents the number of states.\n\n• When specified as a 7-element vector, the state vector describes 3-D motion. You can specify the state vector as a row or column vector. The components of the state vector are `[x;vx;y;vy;omega;z;vz]` where `x` represents the x-coordinate and `vx` represents the velocity in the x-direction. `y` represents the y-coordinate and `vy` represents the velocity in the y-direction. `omega` represents the turn rate. `z` represents the z-coordinate and `vz` represents the velocity in the z-direction.\n\nWhen specified as a 7-by-N matrix, each column represents a different state vector. N represents the number of states.\n\nPosition coordinates are in meters. Velocity coordinates are in meters/second. Turn rate is in degrees/second.\n\nExample: `[5;0.1;4;-0.2;0.01]`\n\nData Types: `double`\n\nMeasurement frame, specified as `'rectangular'` or `'spherical'`. When the frame is `'rectangular'`, a measurement consists of the x, y, and z Cartesian coordinates of the tracked object. When specified as `'spherical'`, a measurement consists of the azimuth, elevation, range, and range rate of the tracked object.\n\nData Types: `char`\n\nSensor position with respect to the global coordinate system, specified as a real-valued 3-by-1 column vector. Units are in meters.\n\nData Types: `double`\n\nSensor velocity with respect to the global coordinate system, specified as a real-valued 3-by-1 column vector. Units are in meters/second.\n\nData Types: `double`\n\nLocal sensor coordinate axes, specified as a 3-by-3 orthogonal matrix. Each column specifies the direction of the local x-, y-, and z-axes, respectively, with respect to the global coordinate system.\n\nData Types: `double`\n\nMeasurement parameters, specified as a structure or an array of structures. The fields of the structure are:\n\nFieldDescriptionExample\n`Frame`\n\nFrame used to report measurements, specified as one of these values:\n\n• `'rectangular'` — Detections are reported in rectangular coordinates.\n\n• `'spherical'` — Detections are reported in spherical coordinates.\n\n`'spherical'`\n`OriginPosition`Position offset of the origin of the frame relative to the parent frame, specified as an `[x y z]` real-valued vector.`[0 0 0]`\n`OriginVelocity`Velocity offset of the origin of the frame relative to the parent frame, specified as a `[vx vy vz]` real-valued vector.`[0 0 0]`\n`Orientation`Frame rotation matrix, specified as a 3-by-3 real-valued orthonormal matrix.`[1 0 0; 0 1 0; 0 0 1]`\n`HasAzimuth`Logical scalar indicating if azimuth is included in the measurement.`1`\n`HasElevation`Logical scalar indicating if elevation is included in the measurement. 
For measurements reported in a rectangular frame, and if `HasElevation` is false, the reported measurements assume 0 degrees of elevation.`1`\n`HasRange`Logical scalar indicating if range is included in the measurement.`1`\n`HasVelocity`Logical scalar indicating if the reported detections include velocity measurements. For measurements reported in the rectangular frame, if `HasVelocity` is false, the measurements are reported as `[x y z]`. If `HasVelocity` is `true`, measurements are reported as `[x y z vx vy vz]`.`1`\n`IsParentToChild`Logical scalar indicating if `Orientation` performs a frame rotation from the parent coordinate frame to the child coordinate frame. When `IsParentToChild` is `false`, then `Orientation` performs a frame rotation from the child coordinate frame to the parent coordinate frame.`0`\n\nData Types: `struct`\n\n## Output Arguments\n\ncollapse all\n\nMeasurement Jacobian, returned as a real-valued 3-by-5 or 4-by-5 matrix. The row dimension and interpretation depend on value of the `frame` argument.\n\nFrameMeasurement Jacobian\n`'rectangular'`Jacobian of the measurements `[x;y;z]` with respect to the state vector. The measurement vector is with respect to the local coordinate system. Coordinates are in meters.\n`'spherical'`Jacobian of the measurement vector `[az;el;r;rr]` with respect to the state vector. Measurement vector components specify the azimuth angle, elevation angle, range, and range rate of the object with respect to the local sensor coordinate system. Angle units are in degrees. Range units are in meters and range rate units are in meters/second.\n\ncollapse all\n\n### Azimuth and Elevation Angle Definitions\n\nDefine the azimuth and elevation angles used in Automated Driving Toolbox™.\n\nThe azimuth angle of a vector is the angle between the x-axis and its orthogonal projection onto the xy plane. The angle is positive in going from the x axis toward the y axis. Azimuth angles lie between –180 and 180 degrees. The elevation angle is the angle between the vector and its orthogonal projection onto the xy-plane. The angle is positive when going toward the positive z-axis from the xy plane.", null, "" ]
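The azimuth and elevation convention described in the last paragraph can be written out directly. The following is an illustrative Python/NumPy sketch of those two angle definitions only; it is not MathWorks code and does not reproduce the `ctmeasjac` Jacobian itself.

```python
import numpy as np

def az_el_deg(v):
    """Azimuth and elevation (degrees) of a 3-D vector, following the
    definitions above: azimuth is measured in the x-y plane from the
    x-axis toward the y-axis; elevation is measured from the x-y plane
    toward the positive z-axis."""
    x, y, z = v
    az = np.degrees(np.arctan2(y, x))               # in (-180, 180]
    el = np.degrees(np.arctan2(z, np.hypot(x, y)))  # positive above the x-y plane
    return az, el

print(az_el_deg([1.0, 1.0, 0.0]))   # (45.0, 0.0)
print(az_el_deg([0.0, 1.0, 1.0]))   # (90.0, 45.0)
```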
[ null, "https://es.mathworks.com/help/driving/ref/azel.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5857446,"math_prob":0.97310287,"size":829,"snap":"2020-34-2020-40","text_gpt3_token_len":218,"char_repetition_ratio":0.21090908,"word_repetition_ratio":0.04040404,"special_character_ratio":0.17852835,"punctuation_ratio":0.17272727,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99213886,"pos_list":[0,1,2],"im_url_duplicate_count":[null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T06:50:03Z\",\"WARC-Record-ID\":\"<urn:uuid:2ecdf8fa-0425-45b1-95b3-8c0f0f4b42b1>\",\"Content-Length\":\"115006\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab94e2a1-d9c5-463c-bca4-06a3e8e392a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:b3c27a7b-455f-4382-ab15-d776bfc4cc91>\",\"WARC-IP-Address\":\"23.66.56.59\",\"WARC-Target-URI\":\"https://es.mathworks.com/help/driving/ref/ctmeasjac.html\",\"WARC-Payload-Digest\":\"sha1:ZVTJVYJEN7P623J5VEXXDLP2MYRQ3BEN\",\"WARC-Block-Digest\":\"sha1:AQ2VCZZOAGILZEXNWWWRIWSLB4PFQLRD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738878.11_warc_CC-MAIN-20200812053726-20200812083726-00480.warc.gz\"}"}
http://gm-machinerie.com/zsvcz7zmk/cost-function-coursera.html
[ "", null, "", null, "Which minimum you find with gradient descent depends on the initialization. ^ 2; function J = computeCost (X, y, theta) %COMPUTECOST Compute cost for linear regression % J = COMPUTECOST(X, y, theta) computes the cost of using theta as the % parameter for linear regression to fit the data points in X and y % Initialize some useful values m = length (y); % number of training examples % You need to return the following variables correctly J = 0; % ===== YOUR CODE HERE ===== % Instructions: Compute the cost of a particular choice of theta % You should set J to the cost. 2. 5]', X, y)); I am a mentor for the Coursera \"Machine Learning\" course. In contrast, the cost function, J, that's a function of the parameter, theta one, which controls the slope of the straight line. This is typically expressed as a difference or distance between the predicted value and the actual value. It turns out that these squared error cost function is a reasonable choice and works well for problems for most regression programs. In actuality, cost should be 0. %COMPUTECOST Compute cost for linear regression. I see that the Style cost function uses the Style (or Gram) matrices and computes the matrices in the style image as well as the generated image (I’m talking here of Neural Style Transfer) My question is why is the same approach not followed for the content and generated image so as to compute a content matrix (just like the style matrix) And then going to add that to a style cost function which is now a function of S,G and what this does is it measures how similar is the style of the image G to the style of the image S. Posted by 15 hours ago. Some programming assignments count toward your final course grade, while others are just for practice. Now you will implement code to compute the cost function and gradient for regularized logistic regression. We also don’t have to do feature scaling or find alpha when doing the normal method. The intuition is pretty simple if we look at the function graphs. Let's start by defining the content cost component. Stack Overflow for Teams is a private, secure spot for you and your coworkers to find and share information. % parameter for linear regression to fit the data points in X and y. 본래 개인적으로 정리 하는 것이 목적었어서 강의내용을 모두 포함하지는 않으며, 강의에  function out = output(partId, auxstring) out = sprintf('%0. The cost for any example x (i) is always ≥ 0 since it is the negative log of a quantity less than one. x is a vector, X is a matrix where each row is one vector x transposed. Cost function. Dec 15, 2014 · Machine Learning Tutorial Python - 4: Gradient Descent and Cost Function - Duration: 28:26. Its cost function is given by Its cost function is given by Now, once the cost function is known, the next step is to minimize it using one of the optimization algorithms available, e. For example  Video created by Stanford University for the course \"Machine Learning\". Programming assignments include both assignment instructions and assignment parts. Coursera ML 机器学习 ; 3. com phone= +251943667727 (telegram, viber, whats up) I'm in the beginnings of following along with the Coursera machine learning course, and I just did univariate linear regression. Teams. You should return the partial derivatives of % the cost function with respect to Theta1 and Theta2 in Theta1_grad and % Theta2_grad, respectively. Nov 27, 2017 · In ML, cost functions are used to estimate how badly models are performing. 
With one variable the hypothesis is h_theta(x) = theta0 + theta1 * x, and the parameters theta0 and theta1 determine how the hypothesis looks; for instance, with theta0 = 0 and theta1 = 0.5, an input of x = 1 gives h(1) = 0 + 0.5 * 1 = 0.5. The cost function is

J(theta) = (1 / 2m) * sum over i of ( h_theta(x^(i)) - y^(i) )^2,

a summation over the m training examples, so it is always greater than or equal to zero, and the closer the hypothesis matches the training examples, the smaller its value. Plotting J against theta1 alone gives a bowl-shaped curve; for data that lie on a straight line of slope 1, the cost is at its minimum when theta1 = 1, which makes sense. With two parameters, J(theta0, theta1) can be drawn as a 3-D surface plot, where the axes are labelled theta0 and theta1 and the height of the surface above each point gives the cost, or as a contour plot, in which each contour line joins points where the cost has the same constant value.

Gradient descent minimises J by repeatedly updating each parameter: the partial derivative of the cost with respect to that parameter is multiplied by -alpha, where alpha is the learning rate, and added to the parameter. Batch gradient descent looks at every example in the entire training set on every step, and which minimum it finds depends on the initialization. Plotting the intermediate values of the cost function against the iteration number shows the effect of the learning rate: for a small alpha such as 0.01 the cost decreases only slowly, meaning slow convergence, while among the larger values tried in the exercise alpha = 1.3 is the largest and alpha = 1.0 converges faster.
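A minimal NumPy sketch of batch gradient descent for this cost (the names and default values are mine); keeping the cost history is a convenient way to check that a chosen learning rate really makes the cost decrease:

```python
import numpy as np

def gradient_descent(X, y, theta, alpha=0.01, num_iters=1500):
    """Batch gradient descent for linear regression.

    Each iteration uses all m examples and updates every parameter
    simultaneously; returns the fitted theta and the cost history.
    """
    m = y.shape[0]
    history = []
    for _ in range(num_iters):
        grad = X.T @ (X @ theta - y) / m      # partial derivatives of J
        theta = theta - alpha * grad          # step of size alpha downhill
        residuals = X @ theta - y
        history.append(float(residuals @ residuals) / (2 * m))
    return theta, history
```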
The normal equation is another way of minimising the cost function, and an alternative to gradient descent: take the partial derivative of J(theta) with respect to each theta_j, set it to 0 for every j, and solve for theta0 to thetan. Working through the calculus is fairly involved, but the result gives the minimising theta in a single step rather than iteratively, and no feature scaling and no choice of learning rate alpha are needed. For multiple input features (multivariate linear regression) the same ideas carry over: computeCostMulti.m and gradientDescentMulti.m should support any number of features, and if the single-variable code is already vectorised it can be reused directly here too.

The first exercise also includes some practical steps. plotData.m draws a scatter plot of the data:

```matlab
plot(x, y, 'rx', 'MarkerSize', 10);
ylabel('Profit in $10,000s');
xlabel('Population of City in 10,000s');
```

and the script then calls computeCost to verify the implementation before running gradient descent. Forum posts about this exercise are mostly debugging reports: one learner's regression line looked right but the cost was still extremely high at the final iteration (J(theta) = 2058715091.21221), and another reported that computeCost(X2, y2, theta2) returned 65591548106; the exercises give expected cost values precisely so that implementations can be checked against them.
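For completeness, a NumPy sketch of the normal equation described above; the function name is mine, and the pseudo-inverse is used to guard against a singular X-transpose-X:

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form minimiser of the squared-error cost:
    theta = pinv(X^T X) X^T y.

    No learning rate and no feature scaling are required, but the matrix
    inverse grows expensive when there are very many features.
    """
    return np.linalg.pinv(X.T @ X) @ X.T @ y
```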
Logistic regression is a method for classifying data into discrete outcomes; for example, it can classify an email as spam or not spam. The hypothesis h_theta(x) is the logistic (sigmoid) function applied to theta' * x, and its output is interpreted as the probability that y = 1. Because the goal is no longer to minimise the distance to a real-valued prediction but to push the hypothesis towards the labels 0 and 1, the cost has to be adapted: with the sigmoid inside it, the squared difference would give a non-convex cost, so logistic regression uses a cross-entropy style cost built from logarithms instead. When y = 1 the per-example cost is -log(h_theta(x)); when y = 0 it is -log(1 - h_theta(x)). The full cost is

J(theta) = (1/m) * sum over i of [ -y^(i) * log(h_theta(x^(i))) - (1 - y^(i)) * log(1 - h_theta(x^(i))) ],

with gradient dJ/dtheta_j = (1/m) * sum over i of ( h_theta(x^(i)) - y^(i) ) * x_j^(i). The cost for any example is always greater than or equal to zero, since it is the negative log of a quantity less than one, so J itself is always non-negative; this cost is also convex, and thus friendly to gradient descent. In the exercises, costFunction.m returns both the cost and the gradient (use element-wise multiplication with the log() function when vectorising), and the regularised versions costFunctionReg.m and lrCostFunction.m add lambda/(2m) times the sum of theta_j squared to the cost and lambda/m times theta_j to the gradient for j >= 1, leaving theta0 unregularised. Once the cost and gradient are available, any of several optimisation algorithms can minimise J: gradient descent, conjugate gradient, BFGS or L-BFGS; the exercise uses fminunc. For the provided data, the cost at the initial theta of all zeros is 0.693147, fminunc finds theta of roughly (-25.161272, 0.206233, 0.201470) with a cost of about 0.203498, and for a student with scores 45 and 85 the model then predicts an admission probability of 0.776289.
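A NumPy sketch of the regularised cost and gradient just described (the function names are mine; the intercept term theta[0] is deliberately left unregularised):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y, lam=0.0):
    """Cross-entropy cost and gradient for (regularised) logistic regression."""
    m = y.shape[0]
    h = sigmoid(X @ theta)
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m
    J += lam / (2 * m) * np.sum(theta[1:] ** 2)      # skip the intercept term
    grad = X.T @ (h - y) / m
    grad[1:] += (lam / m) * theta[1:]
    return J, grad
```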
For neural networks the cost function is a generalisation of the logistic regression cost: instead of one output the network generates K outputs, so h_theta(x) is a K-dimensional vector, (h_theta(x))_k refers to its k-th element, and the labels y are first recoded as binary vectors of 1's and 0's with one element per class. The cost sums the logistic cost over all m examples and all K output units,

J(theta) = (1/m) * sum over i and k of [ -y_k^(i) * log((h_theta(x^(i)))_k) - (1 - y_k^(i)) * log(1 - (h_theta(x^(i)))_k) ],

plus a regularisation term over all the weights except the bias terms. A common misconception is that y_k^(i) is a vector; it is simply one number, so each element of y only interacts with its matching element of the activations, which is the definition of an element-wise operation. In ex4 the cost is implemented in nnCostFunction.m: the unregularised feed-forward cost should come out at about 0.287629 for the provided weights, which is worth verifying before submitting Part 1 to the grader; Part 2 implements backpropagation to return the partial derivatives of the cost with respect to Theta1 and Theta2 in Theta1_grad and Theta2_grad (a for-loop over the training examples is the recommended first implementation), and Part 3 adds regularisation to both the cost and the gradients.

Unlike logistic regression, the cost function for a neural network is non-convex, so it may have multiple minima and gradient descent is not guaranteed to find the global one. Initialising all the parameters to ones instead of zeros does not solve the related symmetry problem either: as long as every parameter starts at the same value the hidden units remain identical, which is why the parameters are initialised randomly.
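A small NumPy sketch of that cost computation, assuming forward propagation has already produced the output activations H (m by K) and the labels have been one-hot encoded into Y (m by K); the names are mine:

```python
import numpy as np

def nn_cost(H, Y, thetas=(), lam=0.0):
    """Neural network cost: logistic cost summed over the K output units,
    plus regularisation over all weights except the bias columns."""
    m = Y.shape[0]
    J = np.sum(-Y * np.log(H) - (1 - Y) * np.log(1 - H)) / m
    J += lam / (2 * m) * sum(np.sum(T[:, 1:] ** 2) for T in thetas)
    return J
```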
A note on terminology: the loss (or error) function is defined for a single training example, while the cost function is taken over the entire training set, or over a mini-batch in mini-batch gradient descent; the overall cost is 1/m times the sum of the loss applied to each training example, and it may also include a model-complexity penalty (regularisation). More generally, the purpose of a cost function is either to be minimised, in which case the returned value is usually called the cost, loss or error, or to be maximised, in which case it is usually called a reward. The support vector machine objective from later in the course has the same shape: minimise over theta the quantity C * sum over examples of [ y * cost1(z) + (1 - y) * cost0(z) ] plus the regularisation term (1/2) * sum of theta_j squared excluding the first parameter, where C plays the role of 1/lambda, so a lower C is selected to fight overfitting.

Cost functions also come up in applied questions. One forum thread asks about anomaly detection in time-series data where a flagged anomaly must almost certainly be a real one, because the resources available to investigate each alert are limited, while it is acceptable to miss a few anomalies; in other words, precision is valued over recall in that setting. On the tooling side, in TensorFlow 1.x the whole minimisation was wrapped up as tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost) inside a session, whereas in TensorFlow 2 that session-based minimize pattern is gone and the gradient step is written out at a lower level.
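A minimal sketch of that TensorFlow 2 pattern; the toy variable and quadratic cost below are mine, purely for illustration:

```python
import tensorflow as tf

w = tf.Variable([0.0, 0.0])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

def cost():
    # toy quadratic cost with its minimum at w = [1, 2]
    return tf.reduce_sum((w - tf.constant([1.0, 2.0])) ** 2)

for _ in range(200):
    with tf.GradientTape() as tape:     # records operations for autodiff
        loss = cost()
    grads = tape.gradient(loss, [w])    # d(loss)/dw
    optimizer.apply_gradients(zip(grads, [w]))
```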
In the deeplearning.ai course on convolutional networks, the neural style transfer algorithm is also posed as minimising a cost function, this time over the generated image G itself. The overall cost has a content component and a style component, weighted by two hyperparameters alpha and beta that specify the relative weighting between the content cost and the style cost:

J(G) = alpha * J_content(C, G) + beta * J_style(S, G).

The content cost component is defined first, from the activations of a chosen layer for the content image C and the generated image G. The style cost is built from the style (Gram) matrices computed on the style image S and on the generated image G, and it measures how similar the style of the image G is to the style of the image S. One recurring learner question is why the same Gram-matrix approach is not also followed for the content cost, which instead compares the layer activations directly. Because J(G) is a function of the image, minimising it with gradient descent means computing the derivative of the cost with respect to G and repeatedly updating the pixel values of G itself, for example an RGB image of size 100 by 100 by 3.
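A small NumPy sketch of the two building blocks named above, the Gram matrix of a layer's activations and the weighted total cost; the default alpha and beta values are placeholders of mine, not values from the course:

```python
import numpy as np

def gram_matrix(activations):
    """Style ('Gram') matrix: activations of shape (channels, height, width)
    are flattened to (channels, height*width) and multiplied by their own
    transpose."""
    A = activations.reshape(activations.shape[0], -1)
    return A @ A.T

def total_cost(j_content, j_style, alpha=10.0, beta=40.0):
    """J(G) = alpha * J_content(C, G) + beta * J_style(S, G)."""
    return alpha * j_content + beta * j_style
```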
Programming assignments in the course include both assignment instructions and assignment parts; they require you to write and run a computer program to solve a problem, some of them count toward the final course grade while others are just for practice, and each part is submitted to an automatic grader. The course also has volunteer mentors who answer questions in the forums. Machine Learning was one of the first programming MOOCs Coursera put online, created by Coursera founder and Stanford professor Andrew Ng; although it has run many times and does not seem to have been changed or updated much since its first offering, it holds up well. The lecture videos are credited to Stanford University, and the deep learning material referenced here comes from the deeplearning.ai specialisation, including "Neural Networks and Deep Learning" and the convolutional networks course. Several of the underlying notes are translated study summaries; as one Korean-language note explains, the post is a summary of the Coursera Machine Learning course by Professor Andrew Ng, written as a personal summary that does not include all of the lecture content, with many of the images captured from the lecture videos.
Coursera itself is a large online learning platform with teachers from elite universities, which makes it a good choice for machine learning on a budget. How much a course costs is shown on the course home page; if you land on a Specialization home page instead, check the page of an individual course. Some courses are offered for a one-time payment that lasts for 180 days; other courses are part of Specializations, which means they are available through subscription payments, and Specializations, which combine specific courses to master an area, cost between $250 and $500 and last from four to six months. Coursera Plus, introduced in 2020, is an annual subscription of $499 per year that gives unlimited access to the majority of the catalogue (3000+ courses, Specializations and Professional Certificates) and is charged once a year until you cancel; payments in some areas may also include sales tax, which is listed on the checkout page. Individual courses can often be taken without payment: the course may offer "Full Course, No Certificate" instead, an option that lets you see all course materials, submit the required assessments and get a final grade, but without a Course Certificate at the end, whereas completing a paid course or Specialization earns a certificate (the audit option applies to courses, not Specializations). Coursera also offers complete degree programs from its partner universities, and it occasionally runs promotions; one widely shared forum post announced 100 courses at no charge, certificate of completion included and no credit card needed, until 31st July. For comparison, Udacity recently changed the pricing of its Machine Learning Nanodegree from roughly $200 a month to a flat fee of $999.
[ null, "http://gm-machinerie.com/zsvcz7zmk/cost-function-coursera.html", null, "http://gm-machinerie.com/zsvcz7zmk/cost-function-coursera.html", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86804473,"math_prob":0.9416657,"size":37887,"snap":"2020-24-2020-29","text_gpt3_token_len":9317,"char_repetition_ratio":0.18834306,"word_repetition_ratio":0.18165137,"special_character_ratio":0.23878904,"punctuation_ratio":0.10850035,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978129,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-05T04:05:45Z\",\"WARC-Record-ID\":\"<urn:uuid:ddb03fea-5c35-45ea-ae53-3bc491dae1dd>\",\"Content-Length\":\"46304\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95cf8105-e036-4600-ac4d-792b2091a411>\",\"WARC-Concurrent-To\":\"<urn:uuid:343c42bd-4b4f-4d22-809c-8c5e48b5bea9>\",\"WARC-IP-Address\":\"199.188.200.74\",\"WARC-Target-URI\":\"http://gm-machinerie.com/zsvcz7zmk/cost-function-coursera.html\",\"WARC-Payload-Digest\":\"sha1:4EW5RZMJKY27QZSECSXLD5SACNNV7S5X\",\"WARC-Block-Digest\":\"sha1:YFWLZUQW4BKZJFKOKC34GPWZTABX3H5K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348492427.71_warc_CC-MAIN-20200605014501-20200605044501-00273.warc.gz\"}"}
https://www.colorhexa.com/207a01
[ "# #207a01 Color Information\n\nIn a RGB color space, hex #207a01 is composed of 12.5% red, 47.8% green and 0.4% blue. Whereas in a CMYK color space, it is composed of 73.8% cyan, 0% magenta, 99.2% yellow and 52.2% black. It has a hue angle of 104.6 degrees, a saturation of 98.4% and a lightness of 24.1%. #207a01 color hex could be obtained by blending #40f402 with #000000. Closest websafe color is: #336600.\n\n• R 13\n• G 48\n• B 0\nRGB color chart\n• C 74\n• M 0\n• Y 99\n• K 52\nCMYK color chart\n\n#207a01 color description : Dark green.\n\n# #207a01 Color Conversion\n\nThe hexadecimal color #207a01 has RGB values of R:32, G:122, B:1 and CMYK values of C:0.74, M:0, Y:0.99, K:0.52. Its decimal value is 2128385.\n\nHex triplet RGB Decimal 207a01 `#207a01` 32, 122, 1 `rgb(32,122,1)` 12.5, 47.8, 0.4 `rgb(12.5%,47.8%,0.4%)` 74, 0, 99, 52 104.6°, 98.4, 24.1 `hsl(104.6,98.4%,24.1%)` 104.6°, 99.2, 47.8 336600 `#336600`\nCIE-LAB 44.558, -45.992, 48.517 7.56, 14.228, 2.376 0.313, 0.589, 14.228 44.558, 66.852, 133.469 44.558, -37.804, 53.884 37.719, -30.231, 22.668 00100000, 01111010, 00000001\n\n# Color Schemes with #207a01\n\n• #207a01\n``#207a01` `rgb(32,122,1)``\n• #5b017a\n``#5b017a` `rgb(91,1,122)``\nComplementary Color\n• #5d7a01\n``#5d7a01` `rgb(93,122,1)``\n• #207a01\n``#207a01` `rgb(32,122,1)``\n• #017a1e\n``#017a1e` `rgb(1,122,30)``\nAnalogous Color\n• #7a015d\n``#7a015d` `rgb(122,1,93)``\n• #207a01\n``#207a01` `rgb(32,122,1)``\n• #1e017a\n``#1e017a` `rgb(30,1,122)``\nSplit Complementary Color\n• #7a0120\n``#7a0120` `rgb(122,1,32)``\n• #207a01\n``#207a01` `rgb(32,122,1)``\n• #01207a\n``#01207a` `rgb(1,32,122)``\n• #7a5b01\n``#7a5b01` `rgb(122,91,1)``\n• #207a01\n``#207a01` `rgb(32,122,1)``\n• #01207a\n``#01207a` `rgb(1,32,122)``\n• #5b017a\n``#5b017a` `rgb(91,1,122)``\n• #0c2e00\n``#0c2e00` `rgb(12,46,0)``\n• #134701\n``#134701` `rgb(19,71,1)``\n• #196101\n``#196101` `rgb(25,97,1)``\n• #207a01\n``#207a01` `rgb(32,122,1)``\n• #279301\n``#279301` `rgb(39,147,1)``\n``#2dad01` `rgb(45,173,1)``\n• #34c602\n``#34c602` `rgb(52,198,2)``\nMonochromatic Color\n\n# Alternatives to #207a01\n\nBelow, you can see some colors close to #207a01. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #3e7a01\n``#3e7a01` `rgb(62,122,1)``\n• #347a01\n``#347a01` `rgb(52,122,1)``\n• #2a7a01\n``#2a7a01` `rgb(42,122,1)``\n• #207a01\n``#207a01` `rgb(32,122,1)``\n• #167a01\n``#167a01` `rgb(22,122,1)``\n• #0c7a01\n``#0c7a01` `rgb(12,122,1)``\n• #027a01\n``#027a01` `rgb(2,122,1)``\nSimilar Colors\n\n# #207a01 Preview\n\nThis text has a font color of #207a01.\n\n``<span style=\"color:#207a01;\">Text here</span>``\n#207a01 background color\n\nThis paragraph has a background color of #207a01.\n\n``<p style=\"background-color:#207a01;\">Content here</p>``\n#207a01 border color\n\nThis element has a border color of #207a01.\n\n``<div style=\"border:1px solid #207a01;\">Content here</div>``\nCSS codes\n``.text {color:#207a01;}``\n``.background {background-color:#207a01;}``\n``.border {border:1px solid #207a01;}``\n\n# Shades and Tints of #207a01\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010500 is the darkest color, while #f4fff1 is the lightest one.\n\n• #010500\n``#010500` `rgb(1,5,0)``\n• #061900\n``#061900` `rgb(6,25,0)``\n• #0c2c00\n``#0c2c00` `rgb(12,44,0)``\n• #114001\n``#114001` `rgb(17,64,1)``\n• #165301\n``#165301` `rgb(22,83,1)``\n• #1b6701\n``#1b6701` `rgb(27,103,1)``\n• #207a01\n``#207a01` `rgb(32,122,1)``\n• #258d01\n``#258d01` `rgb(37,141,1)``\n• #2aa101\n``#2aa101` `rgb(42,161,1)``\n• #2fb401\n``#2fb401` `rgb(47,180,1)``\n• #34c802\n``#34c802` `rgb(52,200,2)``\n``#3adb02` `rgb(58,219,2)``\n• #3fef02\n``#3fef02` `rgb(63,239,2)``\n• #46fd07\n``#46fd07` `rgb(70,253,7)``\n• #55fd1b\n``#55fd1b` `rgb(85,253,27)``\n• #63fd2e\n``#63fd2e` `rgb(99,253,46)``\n• #72fd42\n``#72fd42` `rgb(114,253,66)``\n• #80fe55\n``#80fe55` `rgb(128,254,85)``\n• #8ffe69\n``#8ffe69` `rgb(143,254,105)``\n• #9dfe7c\n``#9dfe7c` `rgb(157,254,124)``\n• #acfe90\n``#acfe90` `rgb(172,254,144)``\n• #bafea3\n``#bafea3` `rgb(186,254,163)``\n• #c9feb6\n``#c9feb6` `rgb(201,254,182)``\n• #d7ffca\n``#d7ffca` `rgb(215,255,202)``\n• #e6ffdd\n``#e6ffdd` `rgb(230,255,221)``\n• #f4fff1\n``#f4fff1` `rgb(244,255,241)``\nTint Color Variation\n\n# Tones of #207a01\n\nA tone is produced by adding gray to any pure hue. In this case, #3c413a is the less saturated color, while #207a01 is the most saturated one.\n\n• #3c413a\n``#3c413a` `rgb(60,65,58)``\n• #394635\n``#394635` `rgb(57,70,53)``\n• #374b30\n``#374b30` `rgb(55,75,48)``\n• #354f2c\n``#354f2c` `rgb(53,79,44)``\n• #325427\n``#325427` `rgb(50,84,39)``\n• #305922\n``#305922` `rgb(48,89,34)``\n• #2e5e1d\n``#2e5e1d` `rgb(46,94,29)``\n• #2c6219\n``#2c6219` `rgb(44,98,25)``\n• #296714\n``#296714` `rgb(41,103,20)``\n• #276c0f\n``#276c0f` `rgb(39,108,15)``\n• #25710a\n``#25710a` `rgb(37,113,10)``\n• #227506\n``#227506` `rgb(34,117,6)``\n• #207a01\n``#207a01` `rgb(32,122,1)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #207a01 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
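The hex-to-RGB conversion and the shade/tint mixing described earlier on this page are easy to reproduce in code. A small Python sketch follows; the function names are mine and the 25% mixing amount is just an example:

```python
def hex_to_rgb(hex_color):
    """Parse a hex triplet such as '#207a01' into an (R, G, B) tuple."""
    h = hex_color.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def mix(rgb, other, amount):
    """Blend rgb toward another colour by the given amount (0.0 to 1.0)."""
    return tuple(round(c + (o - c) * amount) for c, o in zip(rgb, other))

base = hex_to_rgb('#207a01')             # (32, 122, 1)
shade = mix(base, (0, 0, 0), 0.25)       # a shade: mixed with black
tint = mix(base, (255, 255, 255), 0.25)  # a tint: mixed with white
print(base, shade, tint)
```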
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5627419,"math_prob":0.650482,"size":3656,"snap":"2020-10-2020-16","text_gpt3_token_len":1607,"char_repetition_ratio":0.13143483,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5675602,"punctuation_ratio":0.23809524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99091804,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-19T11:04:33Z\",\"WARC-Record-ID\":\"<urn:uuid:094b0552-62ac-464e-906f-3f420e5c6370>\",\"Content-Length\":\"36199\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e0d81587-e040-4f46-b1ca-e21de2bc50a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:f57f2203-0b14-4079-afd7-db3afe483719>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/207a01\",\"WARC-Payload-Digest\":\"sha1:AEGF6GVJB2YODV7U3ZZ42GVJV77DUBOL\",\"WARC-Block-Digest\":\"sha1:CJFCKBIMOZ7VNVQL3UD5GLWF6WUZRXCJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144111.17_warc_CC-MAIN-20200219092153-20200219122153-00131.warc.gz\"}"}
https://patents.justia.com/patent/6476752
Signal processing system employing two bit ternary signal

A signal processing system in which a differentially quantized digital signal is used, the digital signal having two bits which represent only three values, of which one value (10 or 01) is zero and the other two (11 and 00) values represent a positive value and a negative value respectively.

Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a digital signal processor. A preferred embodiment of the invention relates to a digital audio signal processor. For convenience, reference will be made herein to audio signal processing, but the invention is not limited to audio signals.

2. Description of the Prior Art

It is well known to convert analogue signals to digital signals using differential quantization. In differential quantization a signal is sampled, and the difference between the value of a sample and a prediction of the value of the sample is quantized. The prediction may be the previous sample.

In principle the difference can be quantized and represented by an m-bit signal, where m is any integer greater than or equal to one. Common values of m in practice include 1, 8 and 16. Except for m=1, the differences are represented by signed numbers, e.g. 2's complement numbers. A 1-bit signal represents 2 signal levels; greater numbers of bits represent greater numbers of levels. For example, an 8-bit signal represents 256 levels.

A 1-bit digital audio signal processor has been proposed in, for example, GB-A-2 319 931. The processor includes Delta-Sigma Modulator (DSM) filter sections. A 1-bit digital signal processor produces a 1-bit output that contains an audio signal which is obscured by quantization noise to an unacceptable level. It is imperative that the noise spectrum is suitably shaped to place as much of the noise as possible outside the audio signal band. The noise is produced mainly by the quantization of the audio signal to 1 bit.

A DSM filter section is designed to suitably shape the noise to minimise the noise in the audio band. A DSM filter section typically includes, amongst other circuits, at least one multiplier and a quantizer. The multiplier forms the product of an n-bit coefficient and the 1-bit signal. The quantizer re-quantizes the product as a 1-bit signal. The other circuits of a DSM filter section typically include delay elements and adders.

Whilst processing a 1-bit signal involves difficult quantization noise shaping, it has the advantage of simplicity of hardware. A 1-bit multiplier, for example, is a relatively simple circuit.
Furthermore a 1-bit signal processor has the known advantages of an inherently serial structure, and a phase response and distortion approaching that of a high quality analogue system whilst retaining the advantages of digital techniques.\n\nIt is desired to reduce quantization noise and yet retain the many benefits of a 1-bit signal processing system.\n\nSUMMARY OF THE INVENTION\n\nAccording to one aspect of the present invention, there is provided a digital signal processing system in which differentially quantized data is represented by a 2-bit digital signal representing only 3 levels of which one level is zero and the other 2 levels are more positive and more negative respectively than zero.\n\nThe provision of such a differentially quantized digital signal, referred to hereinafter as a “ternary signal”, having 2 bits representing only 3 levels produces better signal to noise ratio than a 1-bit signal, making a DSM filter section easier to design for the purpose of noise shaping. Also with the 2 bit representation of the 3 levels, in a preferred embodiment, only a minimal modification needs to be made to the multiplier(s) and the quantizer of what is otherwise a DSM filter section of a 1-bit signal processor. The modification is very cost effective.\n\nThe three levels may be represented by 11 for a positive increment relative to zero, 00 for a negative increment relative to zero and 01 or 10 for zero. It will be noted that two signal lines of a 2-bit parallel bus carrying the two bits may be reversed without affecting the values represented by the ternary signal.\n\nThe ternary signal used in the present invention differs from a “conventional” 2 bit signal in that a conventional 2 bit signal represents four levels represented by 00, 01, 10, 11 respectively. The represented levels are either asymmetric with respect to zero or none of them represents zero per se. A signal processor using such a conventional 2 bit signal requires more gates than the ternary signal used in the present invention and is thus less cost effective than the present invention.\n\nAccording to another aspect of the invention, there is provided a signal processing system comprising a plurality of Delta Sigma Modulators (DSMs) in cascade, at least one of the DSMs being arranged to receive an r bit signal and to output a q bit signal, where at least one of r and q is a ternary signal having two bits which represent only three values, of which one value is zero, and the other two values represent a positive value and a negative value respectively.\n\nThe ternary signal reduces high frequency noise, allowing the design of cascaded DSMs to be simpler because noise induced instability is reduced.\n\nBRIEF DESCRIPTION OF THE DRAWINGS\n\nThe above and other objects, features and advantages of the invention will be apparent from the following detailed description of illustrative embodiments which is to be read in connection with the accompanying drawings, in which:\n\nFIG. 1 is a block diagram of an audio signal processing system incorporating an embodiment of the present invention;\n\nFIG. 2 is a schematic block diagram of a quantizer of an analogue to digital converter of the system of FIG. 1;\n\nFIG. 3 is a diagram explaining the quantization of an analogue signal;\n\nFIG. 4 is a schematic block diagram of an example of a DSM filter section;\n\nFIG. 5 is a schematic block diagram of a modification of the DSM of FIG. 4 for use as a mixer;\n\nFIG. 6 is a block diagram of a multiplier of the DSM of FIG. 4 or 5;\n\nFIG. 
7 is a block diagram of a quantizer of the DSM of FIG. 4 or 5; and

FIG. 8 is a block diagram of an example of a DSM; and

FIG. 9 is a block diagram of cascaded DSMs.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring to FIG. 1, an analogue signal source, in this example a microphone 10, produces an analogue audio signal which may be amplified and/or level adjusted in an amplifier 12 to produce a signal whose level matches that of an Analogue to Digital (A/D) converter 14.

The A/D converter 14 is a Sigma-Delta converter which samples the analogue signal and produces a digital signal representing the difference between the value of a sample and a prediction of the value of the sample. The principles of operation of Sigma-Delta converters are well known and the converter 14 therefore does not require detailed explanation. In accordance with the embodiment of the invention of FIG. 1, the converter 14 includes a quantizer 16 (an example of which is shown in more detail in FIG. 2) which produces a 2-bit signal representing only three analogue signal levels as follows:

DIGITAL  | ANALOGUE
11       | +1
10 or 01 | 0
00       | -1

The 2-bit signal is thus a ternary signal and is referred to as a ternary signal hereinafter. Because the ternary signal represents differences between sample values and predictions, the levels +1, 0, -1 may also be regarded as step changes in value.

The ternary signal at the output of the converter 14 is a 2-bit parallel signal on two signal lines.

The ternary signal is fed to a signal processor 18. The signal processor may include an equaliser which adjusts the amplitude/frequency characteristics of the audio signal. The signal processor 18 may be an audio mixer. The signal processor 18 may adjust the dynamics of the audio signal. For any of these functions, this example of the processor 18 comprises DSM filter sections. Examples of DSM filter sections are shown in more detail in FIGS. 4 to 8.

The processed ternary signal may be provided to a recording device such as an optical disc recorder or tape recorder, or may be provided to a transmission channel, as indicated by the block 19.

The ternary signal may be received from the transmission channel or reproduced from the recording device and fed to another processor 118 which may be similar to processor 18. To reproduce the analogue audio signal, the processed ternary signal is converted to analogue in a digital to analogue converter 114, amplified by an amplifier 112 and, for example, reproduced by a loudspeaker 110.

It will be noted that the ternary signal is fed from each of the functional blocks 14, 18, 19, 118, 114 to the next block via a 2-bit parallel bus.

Referring to FIG. 2, an example of the quantizer 16 is shown. The quantizer comprises first and second comparators 20, 22. The analogue signal is fed to each comparator via an input 24. The first comparator 20 compares the value of the analogue signal with a reference value of +½. If the value of the analogue signal exceeds (is more positive than) +½ the first comparator 20 outputs logic 1; otherwise it outputs logic 0. The second comparator 22 compares the value of the analogue signal with a reference value of -½. If the value of the analogue signal exceeds (is more positive than) -½ the second comparator outputs logic 1; otherwise it outputs logic 0.

Thus, as shown in FIG. 3, a 2-bit ternary signal is produced according to the following table:

Nominal Value | Analogue Signal Value x | First Comparator | Second Comparator
-1            | x < -½                  | 0                | 0
0             | -½ < x < +½             | 0                | 1
+1            | x > +½                  | 1                | 1
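As a rough behavioural sketch (not the patented circuit itself), the two-comparator quantizer and the meaning of the resulting 2-bit code can be modelled as follows in Python; the function names are mine:

```python
def quantize_ternary(x):
    """Model of quantizer 16: compare the analogue value against +1/2 and -1/2."""
    first = 1 if x > 0.5 else 0     # comparator 20, reference +1/2
    second = 1 if x > -0.5 else 0   # comparator 22, reference -1/2
    return first, second            # 2-bit code on the two signal lines

def ternary_value(code):
    """11 -> +1, 00 -> -1, 01 or 10 -> 0 (insensitive to swapping the lines)."""
    b1, b0 = code
    if b1 and b0:
        return 1
    if not b1 and not b0:
        return -1
    return 0

print(quantize_ternary(0.8), quantize_ternary(0.1), quantize_ternary(-0.7))
# -> (1, 1) (0, 1) (0, 0), i.e. +1, 0, -1
```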
The 2-bit ternary signal has no sign bits. It has a value 01 (or 10) which represents no change between successive signal samples. It is symmetrical in the sense that 00 and 11 represent equal increments each side of zero, and that there are equal numbers (one) of levels each side of zero. It should be noted that the ternary signal is insensitive to reversal of the 2 signal lines: because -1 is represented by 00 and +1 by 11, reversing the signal lines does not alter the value of the signal, and, as will become apparent, the value of 0 is represented equally by 01 and 10, so again reversing the signal lines does not alter the value of the signal.

The ternary signal has other advantages. Compared to a 1-bit signal it has a better signal to noise ratio and it has less high frequency quantization noise, making the design of the filter sections of the processor 18 simpler whilst retaining the advantages of a 1-bit signal with minimal extra hardware cost, as will become apparent. The reduced high frequency noise makes cascading of filters simpler. Compared to a 2-bit signal representing 4 levels it has the advantage of lower hardware cost.

FIG. 4 is a block diagram of an example of a generalised mth order filter section which may be used in the processor 18 or 118. The filter section has a two-bit parallel input bus 2 for the input ternary signal x(n), and a two-bit parallel output bus 5 for the output, processed ternary signal Y(n). The order m of the filter may be one or greater.

The filter section comprises m integrator sections and a final section. Bits are clocked through the filter section by known clocking arrangements (not shown).

The output signal Y(n) is produced by a quantizer Q in the final stage. An example of the quantizer Q is shown in FIG. 7. The quantizer Q requantizes the n-bit signal at its input as the ternary signal:

+1 -> 11
0 -> 01 or 10
-1 -> 00

The integrator sections include multipliers A1 to A6 and C1 to C6. An example of a suitable multiplier is shown in FIG. 6.

The first integrator section comprises a first coefficient multiplier A1 connected to the input 2, a second coefficient multiplier C1 connected to the output 5 of the quantizer Q, an adder 61 which sums the outputs of the multipliers A1 and C1, and an integrator 71 which integrates the output of the adder 61. The integrator has a delay of 1 sample period. The coefficient multipliers A1, C1 multiply the ternary signals by p-bit coefficients A1 and C1.

Each of the m-1 intermediate integrator sections likewise comprises a first coefficient multiplier A2 . . . A5 connected to the input 2, a second coefficient multiplier C2 . . . C5 connected to the quantizer Q, an adder 62 . . . 65 and an integrator 72 . . . 75. Each of the adders 62 to 65 receives the output of the integrator of the preceding stage in addition to the outputs of the coefficient multipliers A and C.

The final section comprises a coefficient multiplier A6 connected directly to the input 2, an adder 66, and the quantizer Q. The adder 66 is connected to the output of the delay element of the integrator 75 of the preceding stage, and to the multiplier A6.
The illustrative filter section of FIG. 4 also comprises feedback multipliers α, β, γ.

Multiplier α multiplies the output of the integrator 72 by a coefficient α and feeds the product back to adder 61. Multiplier β multiplies the output of integrator 74 by a coefficient β and feeds the product back to adder 64. Multiplier γ multiplies the output of integrator 75 by a coefficient γ and feeds the product back to adder 65.

The multipliers α, β, γ allow the filter section to provide, for example, a Chebyshev characteristic. They may be omitted if another characteristic is required.

The coefficients A1 to A6, C1 to C6 and α, β, γ may all be fixed to provide a fixed filter characteristic.

Coefficients A1 to A6 control the gain of the filter section and may be variable. If coefficients A1 to A6 are variable, a coefficient generator 40 is provided to control them in accordance with a control signal 41. The generator 40 may comprise a microcomputer. For the purposes of the following description it is assumed that the multipliers α, β and γ are omitted.

The filter section of FIG. 4 may be modified to operate as a signal mixer. Referring to FIG. 5, which shows one of the intermediate sections, each of the integrator sections and the final section has, in addition to its multiplier A (which is connected to input 2 for receiving a first ternary audio signal), an additional multiplier B connected to another input 22 for receiving a second ternary audio signal to be mixed with the first audio signal. Multiplier B is connected to the adder 6 of the section.

The multipliers A and B multiply the first and second audio signals by coefficients A and B to perform signal mixing. The coefficients A and B are preferably variable to perform variable mixing.

The coefficients A1 to A6, the coefficients C1 to C6 and, if provided, the coefficients B and/or α, β, γ may be chosen by methods known in the art to provide a desired audio signal processing characteristic. The coefficients must also be chosen to shape the quantization noise to minimise it in the frequency band of the audio signal.

For example, the coefficients A and C, and B if provided, may be chosen by:

a) finding the Z-transform H(z) of the desired filter characteristic, e.g. the noise shaping function; and

b) transforming H(z) to coefficients.

This may be done by the methods described in the paper "Theory and Practical Implementation of a Fifth Order Sigma-Delta A/D Converter", Journal of the Audio Engineering Society, Volume 39, no. 7/8, July/August 1991, by R. W. Adams et al., and using the knowledge of those skilled in the art. One way of calculating the coefficients A and C is outlined in the accompanying Annex A.

The multipliers A, B, C multiply the two-bit ternary signal by an n-bit coefficient, where n>2, to produce an n-bit number. The adders 6 are suitable n-bit adders, known in the art. The delay elements Z−1 are also suitable elements known in the art.

An example of a circuit suitable as a multiplier A, B or C is shown in FIG. 6. It is assumed that the coefficients are n-bit 2's complement numbers. The multiplier multiplies the 2-bit ternary signal by the n-bit coefficient.

The 2-bit ternary signal represents +1, 0, -1. If the coefficient is N, the product of the multiplier is +N, 0 or -N, depending on the value of the ternary signal.

FIG. 6 shows, by way of example, a 4-bit multiplier.
An example of a circuit suitable as a multiplier A, B or C is shown in FIG. 6. It is assumed that the coefficients are n-bit 2's complement numbers. The multiplier multiplies the 2-bit ternary signal by the n-bit coefficient.\n\nThe 2-bit ternary signal represents +1, 0, −1. If the coefficient is N, the product of the multiplier is +N, 0 or −N, depending on the value of the ternary signal.\n\nFIG. 6 shows, by way of example, a 4 bit multiplier. To multiply the 4 bit 2's complement number N by the ternary signal it is noted that if the ternary signal is 01 or 10 representing 0, the output is 0. If the ternary signal is 11 representing +1 the output is the coefficient N unchanged. If the ternary signal is 00 representing −1, the output is the coefficient inverted in sign.\n\nThe multiplier comprises n Exclusive −OR gates 62, which receive respective bits of the coefficient N at first inputs. Second inputs of the gates 62 receive, in common, either one of the bits of the ternary signal, via an inverter 66.\n\nThe outputs of the n Exclusive −OR gates are connected to respective first inputs of n AND gates 64. The second inputs of the AND gates are connected, in common, to the output of an Exclusive −NOR gate 60 which receives the two bits of the ternary signal. The Exclusive −NOR gate 60 has the truth table:\n\nVALUE | INPUT | OUTPUT\n−1 | 0 0 | 1\n0 | 1 0 | 0\n0 | 0 1 | 0\n+1 | 1 1 | 1\n\nThus if the ternary signal represents 0 (01, 10), the AND gates output all logic 0 regardless of the outputs of the Exclusive OR gates. 0000 is the 2's complement representation for zero. If the ternary signal represents −1 (00) or +1 (11), the outputs of the AND gates 64 depend on the outputs of the Exclusive −OR gates 62.\n\nAn Exclusive OR gate has the truth table:\n\nVALUE | INPUT | OUTPUT\n−1 | 0 0 | 0\n0 | 1 0 | 1\n0 | 0 1 | 1\n+1 | 1 1 | 0\n\nIf one input to the gate is logic 1, the output is the other input bit inverted. If one input to the gate is logic 0, the output is the other input bit unchanged.\n\nThus for a ternary signal 11 (+1), the inverter 66 inverts one of the two bits to zero. The outputs of the gates 62 are thus the bits of the coefficient N unchanged, i.e. a 2's complement number N.\n\nFor a ternary signal 00 (−1), the outputs of the gates 62 are the bits of the coefficient N inverted, producing a 1's complement number.\n\nThe 1's complement number may be regarded as a sufficient approximation to a 2's complement number. However, in a preferred embodiment of the invention it is converted to a true 2's complement number by adding ‘1’ to it. Referring by way of example to FIG. 4 or 5, each section of a filter includes a p-bit adder 6. The occurrence of value 00 (−1) is detected in the multiplier of FIG. 6 by a NOR gate 65, which produces logic ‘1’ in response to 00 (−1). Logic ‘1’ produced by the NOR gate 65 thus indicates the presence of decimal −1, referred to as “neg” in FIG. 4 or 5. The output ‘1’ of the NOR gate is provided to a carry input of the adder 6 of the stage of FIG. 5. For the DSM of FIG. 4 or 5, it is assumed all the multipliers A, B and C operate on a ternary signal and produce a 1's complement number in response to 00 (−1), together with the ‘neg’ signal, and the associated adder converts the 1's complement number to 2's complement.
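Purely as an illustration (not part of the patent text), the gate-level behaviour just described can be mimicked in Python; the function name, argument names and bit ordering are assumptions of this sketch.

def ternary_multiply(ternary, coeff_bits):
    """Multiply a 2-bit ternary signal by an n-bit 2's complement coefficient, FIG. 6 style.

    ternary    -- tuple of the two signal bits, e.g. (1, 1) for +1
    coeff_bits -- list of the n coefficient bits
    Returns the n product bits (1's complement when the signal is -1) and the 'neg' carry bit.
    """
    b0, b1 = ternary
    invert = 1 - b1                       # inverter 66: one of the signal bits, inverted
    enable = 1 if b0 == b1 else 0         # Exclusive-NOR gate 60: 1 only for 00 or 11
    neg = 1 if (b0, b1) == (0, 0) else 0  # NOR gate 65: detects -1 (00)
    out = []
    for bit in coeff_bits:
        x = bit ^ invert                  # Exclusive-OR gates 62
        out.append(x & enable)            # AND gates 64
    return out, neg                       # the stage adder adds 'neg' as a carry-in

For example, ternary_multiply((1, 1), [0, 1, 0, 1]) returns ([0, 1, 0, 1], 0), i.e. the coefficient unchanged, while ternary_multiply((0, 0), [0, 1, 0, 1]) returns the bitwise inverse together with neg = 1, which the stage adder turns into the true 2's complement of −N.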
FIG. 7 is an example of the quantizer Q of FIG. 4. The quantizer receives at its input for example a 4-bit 2's complement number. For 2's complement positive numbers, including zero, the MSB is always zero, and for 2's complement negative numbers the MSB is always 1.\n\nFor values +4 to +7 the next MSB is one, whilst for values 0 to +3 the next MSB is zero. For values −1 to −3 the next MSB is one and for values −4 to −8 the next MSB is zero.\n\nThus the quantizer comprises two signal lines connected to receive the MSB and the next MSB and an inverter which inverts the MSB of the 2's complement number. The other bits of the 4-bit number are not connected (N/C).\n\nInput Value | MSB | Next MSB | Quantized Bits | Ternary Value\n+4 to +7 | 0 | 1 | 11 | +1\n0 to +3 | 0 | 0 | 10 | 0\n−1 to −3 | 1 | 1 | 01 | 0\n−4 to −8 | 1 | 0 | 00 | −1\n\nThe digital to analogue converter 114 of the example of FIG. 1 receives a 2-bit ternary signal and converts it to an analogue signal. However, the converter 114 may receive a 1-bit signal as will become apparent from the following discussion of other illustrative embodiments of the invention as shown in FIGS. 8 and 9.\n\nFIG. 8 shows a DSM which is a simplified version of the DSM of FIG. 4. It comprises an input for receiving a q bit signal, a Quantiser Q which outputs an r bit signal, and a plurality of sections. A more detailed description is given with reference to FIG. 4.\n\nThe q-bit input signal may be a 1-bit signal and the r-bit output signal may be a ternary signal in accordance with the present invention. Alternatively, the q-bit input signal may be a ternary signal and the r bit output signal a 1-bit signal. As another alternative both may be ternary signals as described with reference to FIG. 4. The input and output signals may have different forms because they are independent: they are both converted by the adders and multipliers to n-bit two's complement numbers before being combined in the adders of the stages of the DSM. The r bit output signal is dependent on the choice of the quantiser Q.\n\nFIG. 9 shows a plurality of DSMs, 90, 91, 92 cascaded, i.e. connected in series. The final DSM 92 is connected to a D/A converter 114. In the example of FIG. 9 the final DSM receives a ternary signal and outputs a 1-bit signal to the converter 114. The preceding DSMs 90 and 91 receive and output ternary signals for this example. The first DSM 90 could receive a 1-bit signal and output a ternary signal as one example of a modification. The use of a ternary signal in the cascaded DSMs reduces problems of instability due to build up of noise, because the HF noise content is reduced compared to a 1-bit signal.\n\nAlthough embodiments of the invention have been described by way of example with reference to 4 bit coefficients, the invention is not limited to 4-bit coefficients. The coefficients may have any suitable number of bits greater than 1, preferably much greater than 1.\n\nThe filter sections of FIGS. 4, 5 and 8 are illustrative. Other sorts of filter sections having a different structure may be used. Many other structures of DSM filter sections are known in the art and are within the scope of the present invention.\n\nThe source of the signals may be any suitable analogue signal source. For the example of audio signals, the source may be an analogue disc or tape record.\n\nThe invention is not limited to audio signals.\n\nAlthough illustrative embodiments of the invention have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope and spirit of the invention as defined by the appended claims.\n\nCALCULATING COEFFICIENTS\n\nThis annex outlines a procedure for analysing a fifth order DSM and for calculating coefficients of a desired filter characteristic.\n\nA fifth order DSM is shown in Figure A having coefficients a to f and A to E, adders 6 and integrators 7. Integrators 7 each provide a unit delay. The outputs of the integrators are denoted from left to right s to w.
The input to the DSM is a signal x[n] where [n] denotes a sample in a clocked sequence of samples. The input to the quantizer Q is denoted y[n] which is also the output signal of the DSM. The analysis is based on a model of operation which assumes quantizer Q is simply an adder which adds random noise to the processed signal. The quantizer is therefore ignored in this analysis.\n\nThe signal y[n]=fx[n]+w[n], i.e. the output signal y[n] at sample [n] is the input signal x[n] multiplied by coefficient f plus the output w[n] of the preceding integrator 7.\n\nApplying the same principles to each output signal of the integrators 7 results in Equations set 1.\n\ny[n]=fx[n]+w[n]\n\nw[n]=w[n−1]+ex[n−1]+Ey[n−1]+v[n−1]\n\nv[n]=v[n−1]+dx[n−1]+Dy[n−1]+u[n−1]\n\nu[n]=u[n−1]+cx[n−1]+Cy[n−1]+t[n−1]\n\nt[n]=t[n−1]+bx[n−1]+By[n−1]+s[n−1]\n\ns[n]=s[n−1]+ax[n−1]+Ay[n−1]\n\nThese equations are transformed into z-transform equations, as is well known in the art, resulting in Equations set 2.\n\nY(z)=fX(z)+W(z)\n\nW(z)(1−z^−1)=z^−1(eX(z)+EY(z)+V(z))\n\nV(z)(1−z^−1)=z^−1(dX(z)+DY(z)+U(z))\n\nU(z)(1−z^−1)=z^−1(cX(z)+CY(z)+T(z))\n\nT(z)(1−z^−1)=z^−1(bX(z)+BY(z)+S(z))\n\nS(z)(1−z^−1)=z^−1(aX(z)+AY(z))\n\nThe z transform equations can be solved to derive Y(z) as a single function of X(z) (Equation 3):\n\n$$Y(z) = fX(z) + \frac{z^{-1}}{1-z^{-1}}\left(eX(z) + EY(z) + \frac{z^{-1}}{1-z^{-1}}\left(dX(z) + DY(z) + \frac{z^{-1}}{1-z^{-1}}\left(cX(z) + CY(z) + \frac{z^{-1}}{1-z^{-1}}\left(bX(z) + BY(z) + \frac{z^{-1}}{1-z^{-1}}\left(aX(z) + AY(z)\right)\right)\right)\right)\right)$$\n\nThis may be reexpressed as shown in the right hand side of the following equation, Equation 4. A desired transfer function of the DSM can be expressed in series form Y(z)/X(z), given in the left hand side of the following equation, and equated with the right hand side in Equation 4.\n\n$$\frac{Y(z)}{X(z)} = \frac{\alpha_0 + \alpha_1 z^{-1} + \alpha_2 z^{-2} + \alpha_3 z^{-3} + \alpha_4 z^{-4} + \alpha_5 z^{-5}}{\beta_0 + \beta_1 z^{-1} + \beta_2 z^{-2} + \beta_3 z^{-3} + \beta_4 z^{-4} + \beta_5 z^{-5}} = \frac{f(1-z^{-1})^5 + z^{-1}e(1-z^{-1})^4 + z^{-2}d(1-z^{-1})^3 + z^{-3}c(1-z^{-1})^2 + z^{-4}b(1-z^{-1}) + z^{-5}a}{(1-z^{-1})^5 - z^{-1}E(1-z^{-1})^4 - z^{-2}D(1-z^{-1})^3 - z^{-3}C(1-z^{-1})^2 - z^{-4}B(1-z^{-1}) - z^{-5}A}$$\n\nEquation 4 can be solved to derive the coefficients f to a from the coefficients α0 to α5 and the coefficients E to A from the coefficients β0 to β5 as follows, noting that the coefficients αn and βn are chosen in known manner to provide a desired transfer function.\n\nf is the only z^0 term in the numerator. Therefore f=α0.\n\nThe term α0(1−z^−1)^5 is then subtracted from the left hand numerator resulting in\n\nα0+α1z^−1+ . . . +α5z^−5−α0(1−z^−1)^5\n\nwhich is recalculated.\n\nSimilarly f(1−z^−1)^5 is subtracted from the right hand numerator. Then e is the only z^−1 term and can be equated with the corresponding α1 in the recalculated left hand numerator.\n\nThe process is repeated for all the terms in the numerator.\n\nThe process is repeated for all the terms in the denominator.\n\nClaims\n\n1.
A signal processing system in which a differentially quantized digital signal is used, said digital signal being a ternary signal having a two bit combination which represents one of only three values, in which the particular value represented depends on the two bits of the combination, of which one value is zero and the other two values represent a positive value and a negative value respectively.\n\n2. A system according to claim 1, wherein said positive and negative values are symmetrical with respect to zero.\n\n3. A system according to claim 1, wherein said two bit combination of the ternary signal is either bits 00, 01, 10 or 11 and wherein said bits 11 of the ternary signal represent said positive value, said bits 00 of the ternary signal represent said negative value and said bits 01 or 10 of the ternary signal represent zero.\n\n4. A system according to claim 1, wherein said positive value represents decimal &plus;1, and said negative value represents decimal −1.\n\n5. A digital signal processing system according to claim 1, which is a digital audio signal processing system.\n\n6. A signal processing system in which a differentially quantized digital signal is used,\n\nsaid digital signal being a ternary signal having two bits which represent only three values, of which one value is zero and the other two values represent a positive value and a negative value respectively,\nsaid system including a quantizer having:\nan input for receiving an analogue signal;\nfirst and second outputs; and\nfirst and second comparators having respective outputs coupled to said first and second outputs,\nrespective first inputs each coupled to said input for receiving said analogue signal, and respective second inputs one receiving a reference level of half said positive value, and the other receiving a reference level of half said negative value,\neach comparator producing a binary indication of whether said analogue signal is more positive or more negative than the reference level applied to its second input, thereby to produce said ternary signal.\n\n7. A system according to claim 1, having an input for receiving said two bits of said ternary signal in parallel and a signal processor for processing said ternary signal.\n\n8. A system according to claim 7, wherein said processor comprises a Delta Sigma Modulator.\n\n9. A system according to claim 7, wherein said processor comprises a coefficient multiplier for multiplying said ternary signal by an n-bit coefficient where n is greater than one to produce an n bit product.\n\n10. A system according to claim 9, wherein said processor comprises a requantiser for requantising said n-bit product.\n\n11. A system according to claim 10, wherein said requantiser requantises the n-bit product as a 1-bit signal.\n\n12. A system according to claim 10, wherein said requantiser requantises said n-bit product as a ternary signal.\n\n13. 
A signal processing system in which a differentially quantized digital signal is used,\n\nsaid digital signal being a ternary signal having two bits which represent only three values, of which one value is zero and the other two values represent a positive value and a negative value respectively;\nthe system having an input for receiving said two bits of said ternary signal in parallel and a signal processor for processing said ternary signal;\nwherein said signal processor comprises a coefficient multiplier for multiplying said ternary signal by an n-bit coefficient, where n is greater than one, to produce an n bit product; and a requantiser for requantising said n-bit product as a ternary signal; and\nsaid requantizer selects the two most significant bits of said n bit signal product and inverts the most significant of said selected bits.\n\n14. A signal processing system in which a differentially quantized digital signal is used,\n\nsaid digital signal being a ternary signal having two bits which represent only three values, of which one value is zero and the other two values represent a positive value and a negative value respectively,\nthe system having an input for receiving said two bits of said ternary signal in parallel and a signal processor for processing said ternary signal;\nwherein said signal processor comprises a coefficient multiplier for multiplying said ternary signal by an n-bit coefficient where n is greater than one to produce an n bit product; and,\nsaid multiplier comprises, for each of said n bits of said coefficient, an Exclusive −OR gate having a first input for receiving said bit of said coefficient and a second input for receiving one of said bits of said ternary signal, and an AND gate having a first input for receiving the output of said Exclusive −OR gate and a second input for receiving an Exclusive −NOR combination of said two bits of said ternary signal.\n\n15. A system according to claim 14, wherein said processor further comprises a NOR gate for forming a NOR combination of said two bits of said ternary signal to detect bits 00, and an adder which adds the output of said NORgate to the output of said multiplier.\n\n16. A signal processing system comprising a plurality of Delta Sigma Modulators (DSMs) in cascade, at least one of the DSMs being arranged to receive an r bit signal and to output a q bit signal, where at least one of r and q is a ternary signal having a two bit combination which represents one of only three values, in which the particular value represented depends on the two bits of the combination, of which one value is zero, and the other two values represent a positive and a negative value respectively.\n\n17. A system according to claim 16, wherein said two bit combination of the ternary signal is either bits 00, 01, 10 or 11 and wherein said bits 11 of the ternary signal represent said positive value, said bits 00 of the ternary signal represent said negative value and said bits 01 or 10 of the ternary signal represent zero.\n\n18. A system according to claim 16, wherein said positive value represents decimal &plus;1, and said negative value represents decimal −1.\n\n19. A system according to claim 16, wherein said r bit signal is said ternary signal and said q bit signal is a 1-bit signal.\n\n20. A system according to claim 16, wherein said r bit signal is a 1-bit signal and said q-bit signal is said ternary signal.\n\n21. 
A system according to claim 16, wherein said positive and negative values are symmetrical with respect to zero.\n\n22. A digital signal processing system according to claim 16, which is a digital audio signal processing system.\n\n23. Recording apparatus for recording an input analog signal on a recording medium in digital form, comprising:\n\nan analog to digital (A/D) converter for converting the input analog signal into a differentially quantized digital signal, said digital signal being a ternary signal having two bits which together represent only three values, of which one value is zero and the other two values represent a positive value and a negative value, respectively; and,\nrecording means for recording the ternary signal on the recording medium.\n\n24. Recording apparatus according to claim 23, further comprising a signal processor coupled between said A/D converter and said recording means, for processing said ternary signal prior to the recording thereof by said recording means.\n\n25. Reproduction apparatus comprising:\n\nreproduction means for reproducing, from a recording medium, a ternary signal having two bits which together represent only three values, of which one value is zero and the other two values represent a positive value and a negative value, respectively, said ternary signal having been derived from an analog signal via differential quantization; and,\na digital to analog (D/A) converter for converting said reproduced ternary signal to analog form to reproduce said analog signal for output.\n\n26. Reproduction apparatus according to claim 25, further comprising a signal processor coupled between said reproduction means and said D/A converter, for processing said ternary signal prior to the D/A conversion thereof.\n\nPatent History\nPatent number: 6476752\nType: Grant\nFiled: May 4, 2000\nDate of Patent: Nov 5, 2002\nAssignee: Sony United Kingdom Limited (Weybridge)\nInventors: Peter Charles Eastty (Eynsham), James Andrew Scott Angus (Clifton)\nPrimary Examiner: Peguy JeanPierre\nAssistant Examiner: Jean Bruner Jeanglaude\nAttorney, Agent or Law Firms: Frommer Lawrence & Haug LLP, William S. Frommer, Glenn F. Savit\nApplication Number: 09/565,619" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9041754,"math_prob":0.96441793,"size":35175,"snap":"2019-43-2019-47","text_gpt3_token_len":8614,"char_repetition_ratio":0.1891331,"word_repetition_ratio":0.20092122,"special_character_ratio":0.24776119,"punctuation_ratio":0.09239364,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9836482,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T03:40:56Z\",\"WARC-Record-ID\":\"<urn:uuid:2c212e43-858e-4df6-9d27-6fda90124c9d>\",\"Content-Length\":\"101855\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9840146-2e26-4d7d-a847-53804e0fc5b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:93c11e08-cdd1-42c1-8ad0-5d498b698139>\",\"WARC-IP-Address\":\"34.228.101.70\",\"WARC-Target-URI\":\"https://patents.justia.com/patent/6476752\",\"WARC-Payload-Digest\":\"sha1:TFP76GTNYY436SO43TD63RQRJEI5FMMI\",\"WARC-Block-Digest\":\"sha1:ZKHHV2QLIOCFYUD6YQJHS477SGNSSXK3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986702077.71_warc_CC-MAIN-20191020024805-20191020052305-00469.warc.gz\"}"}
https://www.geeksforgeeks.org/output-of-java-programs-set-46-multi-threading/?ref=leftbar-rightbar
[ "Skip to content\nRelated Articles\nOutput of Java Programs | Set 46 (Multi-Threading)\n• Difficulty Level : Easy\n• Last Updated : 04 Oct, 2017\n\nPrerequisite : Multi-threading in Java\n\n1. What will be the output of the program?\n\n `class` `Test ``extends` `Thread {``public``    ``void` `run()``    ``{``        ``System.out.println(``\"Run\"``);``    ``}``} ``class` `Myclass {``public``    ``static` `void` `main(String[] args)``    ``{``        ``Test t = ``new` `Test();``        ``t.start();``    ``}``}`\n\nOptions:\n1. One thread created\n2. Two thread created\n3. Depend upon system\n4. No thread created\nOutput:\n\n` The answer is option (2)`\n\nExplanation : In the above program, one thread will be created i.e. the main thread which is responsible to execute the main() method and the child thread will be created after the execution of t.start() which is responsible to execute run() method.\n\n2. What will be the output of the program?\n\n `class` `Test ``extends` `Thread {``public``    ``void` `run()``    ``{``        ``System.out.println(``\"Run\"``);``    ``}``} ``class` `Myclass {``public``    ``static` `void` `main(String[] args)``    ``{``        ``Test t = ``new` `Test();``        ``t.run();``    ``}``}`\n\nOptions:\n1. One thread created\n2. Two thread created\n3. Depend upon system\n4. No thread created\n\nOutput:\n\n` The answer is option (1)`\n\nExplanation : In the above program only one thread will be created i.e. the main thread which is responsible to execute the main() method only. The run() method is called by the object t like a normal method.\n\n3. What will be the order of output of the program?\n\n `class` `Test ``extends` `Thread {``public``    ``void` `run()``    ``{``        ``System.out.println(``\"Run\"``);``    ``}``} ``class` `Myclass {``public``    ``static` `void` `main(String[] args)``    ``{``        ``Test t = ``new` `Test();``        ``t.start();``        ``System.out.println(``\"Main\"``);``    ``}``}`\n\nOptions:\n1. Main Run\n2. Run Main\n3. Depend upon Program\n4. Depend upon JVM\n\nOutput:\n\n` The answer is option (4)`\n\nExplanation : In the above program, we cant predict the exact order of the output as it is decided by the Thread scheduler which is the part of JVM.\n\n4. What will be the output of the program?\n\n `class` `Test ``implements` `Runnable {``public``    ``void` `run()``    ``{``        ``System.out.println(``\"Run\"``);``    ``}``} ``class` `Myclass {``public``    ``static` `void` `main(String[] args)``    ``{``        ``Test t = ``new` `Test();``        ``t.start();``        ``System.out.println(``\"Main\"``);``    ``}``}`\n\nOptions:\n1. Main Run\n2. Run Main\n3. Compile time error\n4. Depend upon JVM\n\nOutput:\n\n` The answer is option (3)`\n\nExplanation : In the above program, we will get compile time error because start() method is present in the Thread class only and we are implementing Runnable interface.\n\n5. What will be the output of the program?\n\n `class` `Test ``implements` `Runnable {``public``    ``void` `run()``    ``{``        ``System.out.println(``\"Run\"``);``    ``}``} ``class` `Myclass {``public``    ``static` `void` `main(String[] args)``    ``{``        ``Thread t1 = ``new` `Thread();``        ``t1.start();``        ``System.out.println(``\"Main\"``);``    ``}``}`\n\nOptions:\n1. Run\n2. Main\n3. Compile time error\n4. 
Run Main\n\nOutput:\n\n` The answer is option (2)`\n\nExplanation : In the above program, we are calling start() method of Thread class which is responsible to execute run() method of Thread class and Thread class run() method has empty implementation. That’s why one child thread will be created but it will not execute Test class run() method.\n\nThis article is contributed by Bishal Kumar Dubey. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.\n\nPlease write comments if you find anything incorrect, or you want to share more information about the topic discussed above.\n\nMy Personal Notes arrow_drop_up" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7476837,"math_prob":0.48983228,"size":3301,"snap":"2021-21-2021-25","text_gpt3_token_len":834,"char_repetition_ratio":0.12708522,"word_repetition_ratio":0.5140845,"special_character_ratio":0.2644653,"punctuation_ratio":0.16717325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98220277,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-25T14:17:16Z\",\"WARC-Record-ID\":\"<urn:uuid:94db02da-f9ca-4bcf-b23c-481f1c5110c1>\",\"Content-Length\":\"119714\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c258458a-3ab4-4e3c-9598-f7caac29ad1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7be7ef6-2762-4ed8-8075-52a4cf838d82>\",\"WARC-IP-Address\":\"104.98.115.161\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/output-of-java-programs-set-46-multi-threading/?ref=leftbar-rightbar\",\"WARC-Payload-Digest\":\"sha1:2Q3MRAN3I6O2CDJXBJA4IWFTP5M77RT2\",\"WARC-Block-Digest\":\"sha1:X27AVC6YGRUJML3ONQQDILTMLOTKAIB2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487630175.17_warc_CC-MAIN-20210625115905-20210625145905-00638.warc.gz\"}"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-3-applications-of-differentiation-review-true-false-quiz-page-285/1
[ "## Calculus 8th Edition\n\nWhen $f'(c)=0$ is only means that the tangent line to f at $x =c$ is horizontal. The function $f(x)=x^{3}$ has $f'(x)=2x^{2}$ $f'(0)=0$ but $f'(x)\\gt 0$ and increasing for all other $x$. In this case $x=c$ is an inflection point and not a local maximum or minimum. Hence, the given statement is false." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81614834,"math_prob":1.0000063,"size":332,"snap":"2019-51-2020-05","text_gpt3_token_len":110,"char_repetition_ratio":0.07926829,"word_repetition_ratio":0.0,"special_character_ratio":0.33734939,"punctuation_ratio":0.065789476,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99991155,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T12:53:54Z\",\"WARC-Record-ID\":\"<urn:uuid:af6a12f2-b466-4dee-a1d3-30de1b39a94c>\",\"Content-Length\":\"71647\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20e0d2e7-f0e7-472a-831e-8a21abda5d7a>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0c5d562-570f-43e6-9065-74edc5fe3e1c>\",\"WARC-IP-Address\":\"3.90.134.5\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-3-applications-of-differentiation-review-true-false-quiz-page-285/1\",\"WARC-Payload-Digest\":\"sha1:YTVI2M5LRHB5YQ2GEBM4IEB2YAUFEYUF\",\"WARC-Block-Digest\":\"sha1:PID5G5P4DS2SGE3Z2MK2KHGVKLFR6UDI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540555616.2_warc_CC-MAIN-20191213122716-20191213150716-00091.warc.gz\"}"}
https://scikit-learn.org/stable/modules/tree.html
[ "# 1.10. Decision Trees¶\n\nDecision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. A tree can be seen as a piecewise constant approximation.\n\nFor instance, in the example below, decision trees learn from data to approximate a sine curve with a set of if-then-else decision rules. The deeper the tree, the more complex the decision rules and the fitter the model.\n\nSome advantages of decision trees are:\n\n• Simple to understand and to interpret. Trees can be visualised.\n\n• Requires little data preparation. Other techniques often require data normalisation, dummy variables need to be created and blank values to be removed. Note however that this module does not support missing values.\n\n• The cost of using the tree (i.e., predicting data) is logarithmic in the number of data points used to train the tree.\n\n• Able to handle both numerical and categorical data. However scikit-learn implementation does not support categorical variables for now. Other techniques are usually specialised in analysing datasets that have only one type of variable. See algorithms for more information.\n\n• Able to handle multi-output problems.\n\n• Uses a white box model. If a given situation is observable in a model, the explanation for the condition is easily explained by boolean logic. By contrast, in a black box model (e.g., in an artificial neural network), results may be more difficult to interpret.\n\n• Possible to validate a model using statistical tests. That makes it possible to account for the reliability of the model.\n\n• Performs well even if its assumptions are somewhat violated by the true model from which the data were generated.\n\nThe disadvantages of decision trees include:\n\n• Decision-tree learners can create over-complex trees that do not generalise the data well. This is called overfitting. Mechanisms such as pruning, setting the minimum number of samples required at a leaf node or setting the maximum depth of the tree are necessary to avoid this problem.\n\n• Decision trees can be unstable because small variations in the data might result in a completely different tree being generated. This problem is mitigated by using decision trees within an ensemble.\n\n• Predictions of decision trees are neither smooth nor continuous, but piecewise constant approximations as seen in the above figure. Therefore, they are not good at extrapolation.\n\n• The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.\n\n• There are concepts that are hard to learn because decision trees do not express them easily, such as XOR, parity or multiplexer problems.\n\n• Decision tree learners create biased trees if some classes dominate. It is therefore recommended to balance the dataset prior to fitting with the decision tree.\n\n## 1.10.1. 
Classification¶\n\nDecisionTreeClassifier is a class capable of performing multi-class classification on a dataset.\n\nAs with other classifiers, DecisionTreeClassifier takes as input two arrays: an array X, sparse or dense, of shape (n_samples, n_features) holding the training samples, and an array Y of integer values, shape (n_samples,), holding the class labels for the training samples:\n\n>>> from sklearn import tree\n>>> X = [[0, 0], [1, 1]]\n>>> Y = [0, 1]\n>>> clf = tree.DecisionTreeClassifier()\n>>> clf = clf.fit(X, Y)\n\n\nAfter being fitted, the model can then be used to predict the class of samples:\n\n>>> clf.predict([[2., 2.]])\narray([1])\n\n\nIn case that there are multiple classes with the same and highest probability, the classifier will predict the class with the lowest index amongst those classes.\n\nAs an alternative to outputting a specific class, the probability of each class can be predicted, which is the fraction of training samples of the class in a leaf:\n\n>>> clf.predict_proba([[2., 2.]])\narray([[0., 1.]])\n\n\nDecisionTreeClassifier is capable of both binary (where the labels are [-1, 1]) classification and multiclass (where the labels are [0, …, K-1]) classification.\n\nUsing the Iris dataset, we can construct a tree as follows:\n\n>>> from sklearn.datasets import load_iris\n>>> from sklearn import tree\n>>> iris = load_iris()\n>>> X, y = iris.data, iris.target\n>>> clf = tree.DecisionTreeClassifier()\n>>> clf = clf.fit(X, y)\n\n\nOnce trained, you can plot the tree with the plot_tree function:\n\n>>> tree.plot_tree(clf)\n\n\nWe can also export the tree in Graphviz format using the export_graphviz exporter. If you use the conda package manager, the graphviz binaries and the python package can be installed with conda install python-graphviz.\n\nAlternatively binaries for graphviz can be downloaded from the graphviz project homepage, and the Python wrapper installed from pypi with pip install graphviz.\n\nBelow is an example graphviz export of the above tree trained on the entire iris dataset; the results are saved in an output file iris.pdf:\n\n>>> import graphviz\n>>> dot_data = tree.export_graphviz(clf, out_file=None)\n>>> graph = graphviz.Source(dot_data)\n>>> graph.render(\"iris\")\n\n\nThe export_graphviz exporter also supports a variety of aesthetic options, including coloring nodes by their class (or value for regression) and using explicit variable and class names if desired. Jupyter notebooks also render these plots inline automatically:\n\n>>> dot_data = tree.export_graphviz(clf, out_file=None,\n... feature_names=iris.feature_names,\n... class_names=iris.target_names,\n... filled=True, rounded=True,\n... special_characters=True)\n>>> graph = graphviz.Source(dot_data)\n>>> graph", null, "Alternatively, the tree can also be exported in textual format with the function export_text. This method doesn’t require the installation of external libraries and is more compact:\n\n>>> from sklearn.datasets import load_iris\n>>> from sklearn.tree import DecisionTreeClassifier\n>>> from sklearn.tree import export_text\n>>> iris = load_iris()\n>>> decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)\n>>> decision_tree = decision_tree.fit(iris.data, iris.target)\n>>> r = export_text(decision_tree, feature_names=iris['feature_names'])\n>>> print(r)\n|--- petal width (cm) <= 0.80\n| |--- class: 0\n|--- petal width (cm) > 0.80\n| |--- petal width (cm) <= 1.75\n| | |--- class: 1\n| |--- petal width (cm) > 1.75\n| | |--- class: 2\n\n\n## 1.10.2. Regression¶\n\nDecision trees can also be applied to regression problems, using the DecisionTreeRegressor class.\n\nAs in the classification setting, the fit method will take as argument arrays X and y, only that in this case y is expected to have floating point values instead of integer values:\n\n>>> from sklearn import tree\n>>> X = [[0, 0], [2, 2]]\n>>> y = [0.5, 2.5]\n>>> clf = tree.DecisionTreeRegressor()\n>>> clf = clf.fit(X, y)\n>>> clf.predict([[1, 1]])\narray([0.5])\n\n\nExamples:\n\n## 1.10.3. Multi-output problems¶\n\nA multi-output problem is a supervised learning problem with several outputs to predict, that is when Y is a 2d array of shape (n_samples, n_outputs).\n\nWhen there is no correlation between the outputs, a very simple way to solve this kind of problem is to build n independent models, i.e. one for each output, and then to use those models to independently predict each one of the n outputs. However, because it is likely that the output values related to the same input are themselves correlated, an often better way is to build a single model capable of predicting simultaneously all n outputs. First, it requires lower training time since only a single estimator is built. Second, the generalization accuracy of the resulting estimator may often be increased.\n\nWith regard to decision trees, this strategy can readily be used to support multi-output problems. This requires the following changes:\n\n• Store n output values in leaves, instead of 1;\n\n• Use splitting criteria that compute the average reduction across all n outputs.\n\nThis module offers support for multi-output problems by implementing this strategy in both DecisionTreeClassifier and DecisionTreeRegressor. If a decision tree is fit on an output array Y of shape (n_samples, n_outputs) then the resulting estimator will:\n\n• Output n_output values upon predict;\n\n• Output a list of n_output arrays of class probabilities upon predict_proba.\n\nThe use of multi-output trees for regression is demonstrated in Multi-output Decision Tree Regression. In this example, the input X is a single real value and the outputs Y are the sine and cosine of X.\n\nThe use of multi-output trees for classification is demonstrated in Face completion with a multi-output estimators. In this example, the inputs X are the pixels of the upper half of faces and the outputs Y are the pixels of the lower half of those faces.
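The multi-output behaviour can be seen with a small toy example (an illustrative snippet, not part of the original scikit-learn documentation; the variable names are arbitrary): a single tree is fit on a two-column target and returns one value per output at prediction time.

>>> import numpy as np
>>> from sklearn.tree import DecisionTreeRegressor
>>> rng = np.random.RandomState(0)
>>> X = np.sort(5 * rng.rand(100, 1), axis=0)
>>> y = np.column_stack([np.sin(X).ravel(), np.cos(X).ravel()])   # shape (100, 2)
>>> reg = DecisionTreeRegressor(max_depth=4).fit(X, y)
>>> reg.predict([[1.5]]).shape
(1, 2)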
References:\n\n## 1.10.4. Complexity¶\n\nIn general, the run time cost to construct a balanced binary tree is $$O(n_{samples}n_{features}\\log(n_{samples}))$$ and query time $$O(\\log(n_{samples}))$$. Although the tree construction algorithm attempts to generate balanced trees, they will not always be balanced. Assuming that the subtrees remain approximately balanced, the cost at each node consists of searching through $$O(n_{features})$$ to find the feature that offers the largest reduction in entropy. This has a cost of $$O(n_{features}n_{samples}\\log(n_{samples}))$$ at each node, leading to a total cost over the entire trees (by summing the cost at each node) of $$O(n_{features}n_{samples}^{2}\\log(n_{samples}))$$.\n\n## 1.10.5. Tips on practical use¶\n\n• Decision trees tend to overfit on data with a large number of features.
Getting the right ratio of samples to number of features is important, since a tree with few samples in high dimensional space is very likely to overfit.\n\n• Consider performing dimensionality reduction (PCA, ICA, or Feature selection) beforehand to give your tree a better chance of finding features that are discriminative.\n\n• Understanding the decision tree structure will help in gaining more insights about how the decision tree makes predictions, which is important for understanding the important features in the data.\n\n• Visualise your tree as you are training by using the export function. Use max_depth=3 as an initial tree depth to get a feel for how the tree is fitting to your data, and then increase the depth.\n\n• Remember that the number of samples required to populate the tree doubles for each additional level the tree grows to. Use max_depth to control the size of the tree to prevent overfitting.\n\n• Use min_samples_split or min_samples_leaf to ensure that multiple samples inform every decision in the tree, by controlling which splits will be considered. A very small number will usually mean the tree will overfit, whereas a large number will prevent the tree from learning the data. Try min_samples_leaf=5 as an initial value. If the sample size varies greatly, a float number can be used as percentage in these two parameters. While min_samples_split can create arbitrarily small leaves, min_samples_leaf guarantees that each leaf has a minimum size, avoiding low-variance, over-fit leaf nodes in regression problems. For classification with few classes, min_samples_leaf=1 is often the best choice.\n\nNote that min_samples_split considers samples directly and independent of sample_weight, if provided (e.g. a node with m weighted samples is still treated as having exactly m samples). Consider min_weight_fraction_leaf or min_impurity_decrease if accounting for sample weights is required at splits.\n\n• Balance your dataset before training to prevent the tree from being biased toward the classes that are dominant. Class balancing can be done by sampling an equal number of samples from each class, or preferably by normalizing the sum of the sample weights (sample_weight) for each class to the same value. Also note that weight-based pre-pruning criteria, such as min_weight_fraction_leaf, will then be less biased toward dominant classes than criteria that are not aware of the sample weights, like min_samples_leaf.\n\n• If the samples are weighted, it will be easier to optimize the tree structure using weight-based pre-pruning criterion such as min_weight_fraction_leaf, which ensure that leaf nodes contain at least a fraction of the overall sum of the sample weights.\n\n• All decision trees use np.float32 arrays internally. If training data is not in this format, a copy of the dataset will be made.\n\n• If the input matrix X is very sparse, it is recommended to convert to sparse csc_matrix before calling fit and sparse csr_matrix before calling predict. Training time can be orders of magnitude faster for a sparse matrix input compared to a dense matrix when features have zero values in most of the samples.\n\n## 1.10.6. Tree algorithms: ID3, C4.5, C5.0 and CART¶\n\nWhat are all the various decision tree algorithms and how do they differ from each other? Which one is implemented in scikit-learn?\n\nID3 (Iterative Dichotomiser 3) was developed in 1986 by Ross Quinlan. The algorithm creates a multiway tree, finding for each node (i.e. 
in a greedy manner) the categorical feature that will yield the largest information gain for categorical targets. Trees are grown to their maximum size and then a pruning step is usually applied to improve the ability of the tree to generalise to unseen data.\n\nC4.5 is the successor to ID3 and removed the restriction that features must be categorical by dynamically defining a discrete attribute (based on numerical variables) that partitions the continuous attribute value into a discrete set of intervals. C4.5 converts the trained trees (i.e. the output of the ID3 algorithm) into sets of if-then rules. These accuracy of each rule is then evaluated to determine the order in which they should be applied. Pruning is done by removing a rule’s precondition if the accuracy of the rule improves without it.\n\nC5.0 is Quinlan’s latest version release under a proprietary license. It uses less memory and builds smaller rulesets than C4.5 while being more accurate.\n\nCART (Classification and Regression Trees) is very similar to C4.5, but it differs in that it supports numerical target variables (regression) and does not compute rule sets. CART constructs binary trees using the feature and threshold that yield the largest information gain at each node.\n\nscikit-learn uses an optimised version of the CART algorithm; however, scikit-learn implementation does not support categorical variables for now.\n\n## 1.10.7. Mathematical formulation¶\n\nGiven training vectors $$x_i \\in R^n$$, i=1,…, l and a label vector $$y \\in R^l$$, a decision tree recursively partitions the feature space such that the samples with the same labels or similar target values are grouped together.\n\nLet the data at node $$m$$ be represented by $$Q_m$$ with $$N_m$$ samples. For each candidate split $$\\theta = (j, t_m)$$ consisting of a feature $$j$$ and threshold $$t_m$$, partition the data into $$Q_m^{left}(\\theta)$$ and $$Q_m^{right}(\\theta)$$ subsets\n\n\\begin{align}\\begin{aligned}Q_m^{left}(\\theta) = \\{(x, y) | x_j <= t_m\\}\\\\Q_m^{right}(\\theta) = Q_m \\setminus Q_m^{left}(\\theta)\\end{aligned}\\end{align}\n\nThe quality of a candidate split of node $$m$$ is then computed using an impurity function or loss function $$H()$$, the choice of which depends on the task being solved (classification or regression)\n\n$G(Q_m, \\theta) = \\frac{N_m^{left}}{N_m} H(Q_m^{left}(\\theta)) + \\frac{N_m^{right}}{N_m} H(Q_m^{right}(\\theta))$\n\nSelect the parameters that minimises the impurity\n\n$\\theta^* = \\operatorname{argmin}_\\theta G(Q_m, \\theta)$\n\nRecurse for subsets $$Q_m^{left}(\\theta^*)$$ and $$Q_m^{right}(\\theta^*)$$ until the maximum allowable depth is reached, $$N_m < \\min_{samples}$$ or $$N_m = 1$$.\n\n### 1.10.7.1. Classification criteria¶\n\nIf a target is a classification outcome taking on values 0,1,…,K-1, for node $$m$$, let\n\n$p_{mk} = 1/ N_m \\sum_{y \\in Q_m} I(y = k)$\n\nbe the proportion of class k observations in node $$m$$. If $$m$$ is a terminal node, predict_proba for this region is set to $$p_{mk}$$. Common measures of impurity are the following.\n\nGini:\n\n$H(Q_m) = \\sum_k p_{mk} (1 - p_{mk})$\n\nEntropy:\n\n$H(Q_m) = - \\sum_k p_{mk} \\log(p_{mk})$\n\nMisclassification:\n\n$H(Q_m) = 1 - \\max(p_{mk})$\n\n### 1.10.7.2. Regression criteria¶\n\nIf the target is a continuous value, then for node $$m$$, common criteria to minimize as for determining locations for future splits are Mean Squared Error (MSE or L2 error), Poisson deviance as well as Mean Absolute Error (MAE or L1 error). 
MSE and Poisson deviance both set the predicted value of terminal nodes to the learned mean value $$\\bar{y}_m$$ of the node whereas the MAE sets the predicted value of terminal nodes to the median $$median(y)_m$$.\n\nMean Squared Error:\n\n\\begin{align}\\begin{aligned}\\bar{y}_m = \\frac{1}{N_m} \\sum_{y \\in Q_m} y\\\\H(Q_m) = \\frac{1}{N_m} \\sum_{y \\in Q_m} (y - \\bar{y}_m)^2\\end{aligned}\\end{align}\n\nHalf Poisson deviance:\n\n$H(Q_m) = \\frac{1}{N_m} \\sum_{y \\in Q_m} (y \\log\\frac{y}{\\bar{y}_m} - y + \\bar{y}_m)$\n\nSetting criterion=\"poisson\" might be a good choice if your target is a count or a frequency (count per some unit). In any case, $$y >= 0$$ is a necessary condition to use this criterion. Note that it fits much slower than the MSE criterion.\n\nMean Absolute Error:\n\n\\begin{align}\\begin{aligned}median(y)_m = \\underset{y \\in Q_m}{\\mathrm{median}}(y)\\\\H(Q_m) = \\frac{1}{N_m} \\sum_{y \\in Q_m} |y - median(y)_m|\\end{aligned}\\end{align}\n\nNote that it fits much slower than the MSE criterion.\n\n## 1.10.8. Minimal Cost-Complexity Pruning¶\n\nMinimal cost-complexity pruning is an algorithm used to prune a tree to avoid over-fitting, described in Chapter 3 of [BRE]. This algorithm is parameterized by $$\\alpha\\ge0$$ known as the complexity parameter. The complexity parameter is used to define the cost-complexity measure, $$R_\\alpha(T)$$ of a given tree $$T$$:\n\n$R_\\alpha(T) = R(T) + \\alpha|\\widetilde{T}|$\n\nwhere $$|\\widetilde{T}|$$ is the number of terminal nodes in $$T$$ and $$R(T)$$ is traditionally defined as the total misclassification rate of the terminal nodes. Alternatively, scikit-learn uses the total sample weighted impurity of the terminal nodes for $$R(T)$$. As shown above, the impurity of a node depends on the criterion. Minimal cost-complexity pruning finds the subtree of $$T$$ that minimizes $$R_\\alpha(T)$$.\n\nThe cost complexity measure of a single node is $$R_\\alpha(t)=R(t)+\\alpha$$. The branch, $$T_t$$, is defined to be a tree where node $$t$$ is its root. In general, the impurity of a node is greater than the sum of impurities of its terminal nodes, $$R(T_t)<R(t)$$. However, the cost complexity measure of a node, $$t$$, and its branch, $$T_t$$, can be equal depending on $$\\alpha$$. We define the effective $$\\alpha$$ of a node to be the value where they are equal, $$R_\\alpha(T_t)=R_\\alpha(t)$$ or $$\\alpha_{eff}(t)=\\frac{R(t)-R(T_t)}{|T|-1}$$. A non-terminal node with the smallest value of $$\\alpha_{eff}$$ is the weakest link and will be pruned. This process stops when the pruned tree’s minimal $$\\alpha_{eff}$$ is greater than the ccp_alpha parameter.\n\nReferences:\n\nBRE\n\nL. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth, Belmont, CA, 1984." ]
[ null, "https://scikit-learn.org/stable/_images/iris.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85859483,"math_prob":0.9927153,"size":18976,"snap":"2021-04-2021-17","text_gpt3_token_len":4297,"char_repetition_ratio":0.1281362,"word_repetition_ratio":0.029482344,"special_character_ratio":0.23166105,"punctuation_ratio":0.12557736,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992706,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-15T14:38:21Z\",\"WARC-Record-ID\":\"<urn:uuid:6c3d959f-3fe3-4413-b1e7-c99634857c0d>\",\"Content-Length\":\"62185\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25a68bf4-ac15-429a-80f9-590e4a6327cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2ed6f3b-5413-43d9-9678-62cbff1d5f05>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://scikit-learn.org/stable/modules/tree.html\",\"WARC-Payload-Digest\":\"sha1:2LIUCGN47X32LD4SQ2C66FMLHZYAG4HE\",\"WARC-Block-Digest\":\"sha1:FJ6VUSRMSG43UGSRX5RFS2AOYBIO5BP4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038085599.55_warc_CC-MAIN-20210415125840-20210415155840-00564.warc.gz\"}"}
https://medium.com/geekculture/intro-to-algorithms-two-pointers-technique-b37f962eab5?source=post_internal_links---------5----------------------------
[ "# Intro to Algorithms: Two Pointers Technique\n\n## A detailed primer on an essential technique\n\nIn my last post, I covered the ins and outs of the Binary Search algorithm. The binary search method assumes that the collection you are browsing has already been sorted, and the two pointers technique is no different. However, whereas binary search locates a single element within a collection, the two pointers technique is used to locate two elements within an array that together meet a condition. Let’s take a look at how it works.\n\n# Charging Ahead: A Naive Solution\n\nLet us turn to a brute force, naive solution to a problem where the two pointers technique might be better suited.\n\nGiven an array of integers (already sorted in ascending order) find and return the indices of the two elements that, when added together, are equal to the provided target value.\n\nOr given an `array` and a `target`, write an algorithm that solves for `array[i] + array[j] === target`.\n\nHow might we devise a brute force approach to this problem? Well, the first step would be to iterate through each element in the array. Then we would have to check each selected element with the remaining elements in our collection, testing if their sum matches the target value. Sounds like a job for some nested loops.\n\nThis solution does technically work, but it’s woefully inefficient. You have to iterate through the entire array for each element in the array until the solution is found! This might not sound terrible if the array is three or four elements long. But imagine doing this with an array that’s one million elements long. You’d have to count to one million (technically 999,999) before you’d even progress to the second element! Think about how long that would take if our solution was to be found in the last two elements in the array.\n\nThis method has a Big O notation of O(n²), meaning that as the size of our array grows, the time it takes to find our answer grows exponentially. Not ideal at all. How might we devise a better solution?\n\n# Implementing Two Pointers\n\nLet’s take a second look at the problem’s description:\n\nGiven an array of integers (already sorted in ascending order) find and return the indices of the two elements that, when added together, are equal to the provided target value.\n\nHow can we leverage this information to construct a more efficient algorithm? With this knowledge, we know that incrementing the index always increases the current element's value, and decrementing the index will always do the opposite.\n\nIn the previous solution, we were always incrementing indices, with our second pointer, `j`, given the value of `i + 1` and then iterating through the remaining elements. Instead, we can have `j` point to the last element of the array and add in some conditional logic.\n\n# Approaching the Base Case\n\nOur conditional logic will need to identify our base case(s)—the terminating scenario(s) that exits our function. That’s pretty simple with our current example.\n\nGiven an array of integers (already sorted in ascending order) find and return the indices of the two elements that, when added together, are equal to the provided target value.\n\nSo our base case is `array[i] + array[j] === target`. When that condition is met we can exit our function and return `[i, j]` as the indices that solve our problem. 
How might we navigate from our position in the previous code example towards that base condition?\n\nThere are two ways that we can fail to satisfy the conditions for our base case. The sum of `array[i]` and `array[j]` will either be greater than or less than our target value. Combine that with what we know about incrementing and decrementing indices and we’ve got all we need to solve our problem.\n\nIf the sum of the two elements is greater than the target, decrementing one of the indices will bring us closer to our goal. The inverse is also true. If the sum of the two elements is less than the target, incrementing one of the indices will steer us in the right direction. Which index should we choose for which operation?\n\nOur first pointer, `i`, starts out at 0, so it’d be pretty hard to decrement it any further (let’s just ignore negative indices for now, as they’re not at all relevant or helpful with our current problem). Similarly, our second pointer, `j`, starts out at `array.length - 1` or the last possible index. If we increment `j` we end up pointing to an element that does not exist! No Bueno.\n\n# Behold The Two Pointers\n\nLooking over our completed algorithm, you can clearly see that it’s significantly more efficient than a brute force approach. At worst, the function will iterate over every element in the given array, giving this function O(n) time complexity. That means our improved function scales linearly rather than quadratically, a dramatic improvement over the O(n²) complexity of a brute force approach. Quite an improvement!\n\n# Recap\n\nTo review, the two pointers technique is useful when searching for a pair of elements within an array that meet a specific condition (i.e. base case). We then evaluate the ways that the two elements can fail to meet the base condition and write conditional logic that will get us closer to our goal.\n\nThe two pointers technique is not an algorithm but rather a technique, meaning there are a variety of ways it can be used. The underlying principle, however, is an important one. Using two (or more) pointers allows us to traverse and process data much more efficiently than a brute force approach. What other scenarios might this technique be found useful? How else could you write the solution provided above? (Perhaps the focus of future blog posts?)\n\nResources:\n\nTwo Sum II - Leetcode Challenge #167 (the inspiration for this write up)\nTwo Pointers Technique - Geeks for Geeks\nGrokking Algorithms: An Illustrated Guide for Programmers and Other Curious People\n\n## Geek Culture\n\nA new tech publication by Start it up (https://medium.com/swlh).\n\nWritten by\n\n## Garrett Bodley\n\nFull Stack web developer. Former Creative. Amateur baker, barista, and cat dad. he/him", null, "## Geek Culture\n\nA new tech publication by Start it up (https://medium.com/swlh)." ]
[ null, "https://miro.medium.com/fit/c/80/80/1*bWAVaFQmpmU6ePTjNIje_A.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8939458,"math_prob":0.90944153,"size":5907,"snap":"2021-31-2021-39","text_gpt3_token_len":1226,"char_repetition_ratio":0.13010333,"word_repetition_ratio":0.10573123,"special_character_ratio":0.20382597,"punctuation_ratio":0.094024606,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.961623,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-31T22:16:13Z\",\"WARC-Record-ID\":\"<urn:uuid:72aa7376-725d-43c6-8889-9c03d638ad7a>\",\"Content-Length\":\"139639\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6756441e-7200-4581-8087-df255ecac095>\",\"WARC-Concurrent-To\":\"<urn:uuid:e14b0b53-583e-4571-886b-26cbca0a877c>\",\"WARC-IP-Address\":\"162.159.153.4\",\"WARC-Target-URI\":\"https://medium.com/geekculture/intro-to-algorithms-two-pointers-technique-b37f962eab5?source=post_internal_links---------5----------------------------\",\"WARC-Payload-Digest\":\"sha1:WUYCIWXL5SW2QQMF76QFKVHVJ4MAS2WK\",\"WARC-Block-Digest\":\"sha1:6AIAO3Y4IWTQKGO4WR7WRUS2KILFYAYW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154126.73_warc_CC-MAIN-20210731203400-20210731233400-00454.warc.gz\"}"}
https://gist.github.com/guyabel/9574156
[ "\\documentclass[11pt]{report} \\usepackage[a4paper,margin=2cm, bindingoffset=2cm]{geometry} \\usepackage{appendix} \\usepackage{amsmath} \\usepackage{booktabs} \\usepackage{threeparttable} \\usepackage{natbib} \\bibliographystyle{chicago} %to stop orphan lines \\widowpenalty=10000 \\clubpenalty=10000 \\raggedbottom %line spacing \\linespread{1.3} \\begin{document} \\begin{titlepage} \\begin{center} \\large \\textsc{University of Southampton} \\\\ \\ \\\\ \\textsc{Faculty of Law, Arts \\& Social Sciences} \\\\ \\ \\\\ \\textsc{School of Social Sciences}\\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\Huge \\textbf{International Migration Flow Table Estimation} \\ \\\\ \\ \\\\ \\large by \\ \\\\ \\ \\\\ Guy J. Abel \\vfill Thesis for the degree of Doctor of Philosophy \\\\ \\ \\\\ \\ \\\\ April 2009 \\end{center} \\end{titlepage} \\begin{titlepage} \\begin{center} \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ \\ \\\\ To Ed and Diana \\end{center} \\end{titlepage} %Roman Page Numbering \\pagenumbering{roman} \\chapter*{{Abstract}\\markboth{Acknowledgements}{Acknowledgements}} \\addcontentsline{toc}{chapter}{Abstract} A methodology is developed to estimate comparable international migration flows between a set of countries. International migration flow data may be missing, reported by the sending country, reported by the receiving country or reported by both the sending and receiving countries. For the last situation, reported counts rarely match due to differences in definitions and data collection systems. In this thesis, reported counts are harmonized using correction factors estimated from a constrained optimization procedure. Factors are applied to scale data known to be of a reliable standard, creating an incomplete migration flow table of harmonized values. Cells for which no reliable reported flows exist are then estimated from a negative binomial regression model fitted using the Expectation-Maximization (EM) type algorithm. Covariate information for this model is drawn from international migration theory. Finally, measures of precision for all missing cell estimates are derived using the Supplemented EM algorithm. Recent data on international migration between countries in Europe are used to illustrate the methodology. The results represent a complete table of comparable flows that can be used by regional policy makers and social scientist alike to better understand population behaviour and change.\\\\ \\tableofcontents \\listoffigures \\addcontentsline{toc}{chapter}{List of Figures} \\listoftables \\addcontentsline{toc}{chapter}{List of Tabes} \\chapter*{Declaration Of Authorship} \\addcontentsline{toc}{chapter}{Declaration Of Authorship} I, Guy Jonathan Abel, declare that the thesis entitled International Migration Flow Table Estimation and the work presented in the thesis are both my own, and have been generated by me as the result of my own original research. I confirm that: \\begin{itemize} \\item this work was done wholly or mainly while in candidature for a research degree at this University; \\item where any part of this thesis has previously been submitted for a degree or any other qualification at this University or any other institution, this has been clearly stated; \\item where I have consulted the published work of others, this is always clearly attributed; \\item where I have quoted from the work of others, the source is always given. 
With the exception of such quotations, this thesis is entirely my own work; \\item I have acknowledged all main sources of help; \\item where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself; \\end{itemize} \\vspace{2cm} Signed: \\dotfill \\vspace{2cm} \\newline \\noindent Date %Include Acknowledgements in TOC \\chapter*{Acknowledgements} \\addcontentsline{toc}{chapter}{Acknowledgements} This work was undertaken with financial support of the Economic and Social Research Council (PTA No-031-2004-00...). This thesis would not have been possible without the support of many people. I would like to express my sincere gratitude to: \\begin{itemize} \\item My supervisors, James Raymer and Peter Smith, who were abundantly helpful and offered invaluable advice, assistance and support throughout my studies in Southampton. \\item ... \\end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\chapter{Introduction} %Arabic Page Numbering \\pagenumbering{arabic} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Migration flow data inform policy makers, the media and academic community to the level and direction of population movements. ... \\section{International Migration Data} Migration can be measured as either a flow or stock. ... \\section{International Migration Flow Tables} Data on migration between a set of regions are commonly presented in a square table with off diagonal entries containing the number of people moving from any given origin to any given destination. ... \\section{Thesis Aims and Scope} The study of transition patterns, such as migration flows, generally involves three steps \\citep{rogers1980imm}. ... \\section{Thesis Structure} The study is structured in seven chapters. ... %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\chapter{Statistical Modelling} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\section{Introduction} This chapter outlines the statistical modelling techniques for migration flow tables to set the stage for future chapters. ... \\section{Regression Models} In many scientific studies, interest lies in the relationship between two or more observable quantities. ... \\section{Generalized Linear Models} Linear regression models are part of a range of statistical models, known as generalized linear models \\citep{nelder1972glm}. \\subsection{Normal and Log-Normal Distribution} In a continuous case, the response variable can be assumed to be independently normally distributed with parameters $( \\mu_{i} ,\\sigma ^{2})$ for the mean and variance respectively. \\subsection{Poisson Distribution} In a discrete case, a response variable of count data can be assumed to have a Poisson distribution with rate parameter $\\mu$. \\section{Fitting Generalized Linear Models} Maximum likelihood estimates are frequently used in migration models as they posses very desirable asymptotic properties such as consistency, asymptotic normality and asymptotic robustness \\citep[p457-69]{sen1995gms}. \\subsection{Mean and Variance} The mean and variance of the random component in a generalized linear model may be obtained in a general form, allowing the maximum likelihood estimates to be found using IRLS. \\subsection{Likelihood Equations} In order to obtain maximum likelihood parameter estimates for a generalized linear model we must first obtain the likelihood equations. 
\\subsection{Asymptotic Variance-Covariance Matrix of Parameters Estimates} The asymptotic variance-covariance matrix for parameter estimates is required to provide a useful simplification in the IRLS procedure. \\subsection{Iterative Reweighted Least Squares} For the likelihood equations of a classic linear regression model the maximum likelihood estimators of $\\boldsymbol\\beta$ can be found by re-expressing (\\ref{eqn:norleq}) for $\\boldsymbol\\beta$, in a matrix notation: \\section{Negative Binomial Regression Models} The negative binomial distribution has two-parameters that allow a mean and variance to be fitted separately, as opposed to a single parameter Poisson regression model. \\subsection{Asymptotic Variance Covariance Matrix} \\citet[p71]{cameron1998rac} showed that for the negative binomial regression model the maximum likelihood estimates are the solution to the first order conditions \\subsection{Fitting Negative Binomial Regression Model}\\label{sec:nbreg} \\citet[p560-1]{agresti2002cda} noted that a negative binomial model may be fitted in a similar manner as Poisson regression models when the dispersion parameter is known. \\section{Statistical Modelling of Missing Data} International migration flow data is often missing. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\chapter{A Review of Methodologies for Estimating an International Migration Flow Table of Comparable Data} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\section{Introduction} At present, the responsibility for the collection of international migration flow data rests with individual national statistics institutes. ... \\section{Problems of Comparability in International Migration Flow Data} The lack of comparability in international migration data can be traced to the multi-dimensional nature of migration \\citep{goldstein1976frr}. ... \\subsection{Data Production Techniques} Differences in the production of migration flow statistics can be derived from distinctive data collection methods and definitional measurements used by national statistics institutes. ... \\section{International Migration Flow Tables} Migration data are commonly represented in square tables, with off diagonal entries containing the number of people moving from any given origin $i$, to any given destination $j$, in a single time period. ... \\section{Constrained Optimization} Estimates of a complete migration flow table between 28 European nations in 2004 were calculated by \\cite{poulain2007mim} (and \\cite{poulain2008emm}) as part of the MIMOSA project. ... \\section{Model Component Modelling} A multiplicative component approach was applied by \\cite{raymer2007eim} to estimate international migration flows between ten countries in Northern Europe in 1999. ... \\section{Discussion of Frameworks} \\label{sec:predis} Discussion on the presented frameworks and possible extensions is undertaken in the succeeding subsections. ... \\subsection{Constrained Optimization} The framework proposed by \\cite{poulain1993csm} was the first effort to estimate an international migration flow table of comparable data. ... \\subsection{Model Component Modelling} The multiplicative component methodology of \\cite{raymer2007eim} decomposes a flow table into a number of model parameters whose values are estimated using statistical models. ... \\section{Summary and Conclusion} The methodologies presented in this chapter take vastly different approaches to estimating a complete migration table. ... 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\chapter{Overcoming Inconsistencies in International Migration Flow Tables} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\section{Introduction} The lack of comparability in international migration data can be traced to the multi-dimensional nature of migration \\citep{goldstein1976frr}. ... \\section{International Migration Flow Data for the EU15} International migration flow data may be obtained from a number of international organizations. ... \\subsection{Ratings of Migration Data for EU15} \\begin{table}[h] %\\begin{center} \\caption{\\cite{erf2007mim} Ratings of Migration Data for EU15 from 2002 to 2006} \\label{tab:erfrat} \\begin{tabular}{ccccccc} Country & \\multicolumn{3}{c}{Receiving} & \\multicolumn{3}{c}{Sending} \\\\ & Timing & Completeness & Accuracy & Timing & Completeness & Accuracy \\\\ \\toprule AUT & 3 & 4 & 4 & 3 & 4 & 4 \\\\ BEL & 3 & 9 & 9 & 3 & 9 & 9 \\\\ DNK & 2(3) & 4(4) & 4(4) & 3 & 4 & 4 \\\\ FIN & 2(4) & 4(4) & 4(4) & 4 & 4 & 4 \\\\ FRA & 3 & 2 & 9 & & & \\\\ DEU & 2 & 4 & 4 & 2 & 4 & 4 \\\\ GRC & & & & & & \\\\ IRL & 2 & 2 & 2 & 2 & 2 & 2 \\\\ ITA & 2(3) & 3(3) & 3(3) & 4 & 3 & 3 \\\\ LUX & 2 & 3 & 3 & 2 & 3 & 3 \\\\ NLD & 3 & 4 & 4 & 4 & 4 & 4 \\\\ PRT & 4 & 9 & 9 & 3 & 2 & 2 \\\\ ESP & 2 & 3 & 3 & 2 & 3 & 3 \\\\ SWE & 4 & 4 & 4 & 4 & 4 & 4 \\\\ GBR & 4 & 2 & 2 & 4 & 2 & 2 \\\\ \\bottomrule \\end{tabular} \\footnotesize \\begin{tablenotes} \\item[] 0:Worst 1:Worse 2:Insufficient 3:Reasonable 4:Good 5:Excellent 9:Unknown \\item[] Scores in parentheses are for non-national, when national and non-national data are collected differently. \\end{tablenotes} \\end{table} In order to obtain a comparison of the European migration flow data, \\cite{erf2007mim} provided subjective judgements by three characteristics: definitions of migration, measurement systems and intended coverage. ... \\subsection{Data Dissemination in the EU15} Plots of the available counts of migrants with unknown origins or destinations, as a proportion of total sending and receiving countries, are shown in Figure \\ref{fig:harmunk} for EU15 nations between 2002 and 2006. ... \\section{Methodology for Creating Comparable Data from Reliable Data Sources} In this section, a general methodology that allows the estimation of incomplete international migration flow tables is described. ... \\subsection{Counts of Unknown Migrant Origins and Destinations} As previously discussed, international migration flow data are accompanied by a count of migrants with unknown origins or destinations. ... \\subsection{Constrained Optimization} Differences in counts between nations with better quality data can be considered as fixed, where data production techniques do change over time. ... \\section{Estimating Comparable Data from Reliable Data Sources} In order to estimate comparable data from reliable data sources, reported counts are adjusted for unknowns produced in the dissemination of data by national statistics institutes. ... \\subsection{Correction for Unknown Counts} All unknown counts, displayed in Figure \\ref{fig:harmunk} are distributed to origins and destinations using the equations in (\\ref{eqn:unksca}). ... \\subsection{Comparison of Distance Measures} \\label{sec:optdis} Alternative distance functions, to the Chi-Squared distance measure, could provide more stable correction factors over time, and hence better reflect the assumption that data collection methods and definitions remain constant. ... 
\\subsection{Constrained Optimization Over Time} For the distance measure associated with the smallest variance, a new set of time constant correction factors $(\\mathbf{r},\\mathbf{s})$ are estimated. ... \\section{Summary and Conclusion} In this chapter a methodology for the harmonization of data for international migration flows tables was outlined. ... %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\chapter{Estimating Missing Data in International Migration Flow Tables} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\section{Introduction} In this chapter, model based imputations for missing data in flow tables are derived. ... \\section{Models for Migration Flow Tables} \\cite{flowerdew1991prm} outlined two main approaches to the analysis of flow tables that are commonly used for internal mobility data: the gravity model and the spatial interaction model. ... \\section{The Expectation-Maximization (EM) Algorithm} \\label{sec:em} The EM algorithm is an iterative algorithm for maximum likelihood estimation in incomplete data problems. ... \\section{Modelling Incomplete International Migration Flow Tables} \\label{sec:covadd} In this section, negative binomial regression models are fitted to incomplete international migration flow data for the EU15 countries, presented in Figure \\ref{fig:harmfn}. ... \\subsection{Additional Information} \\label{sec:covdis} In order to provide more reasonable imputations, the quasi-independent model was expanded upon. ... \\subsection{Main Effects Model} In order to attain a better model fit and more realistic imputation the Akaike Information Criterion (AIC) was used to select the most suitable variables for a main effects model. ... \\subsection{Interaction Models} To gain a further superior fit the \\texttt{stepAIC} function was run once more with an extended scope of models to consider all two-way interactions, with one exemption, the origin-destination interaction. ... \\section{Summary and Discussion} In this chapter, a complete set of estimates of international migration flow tables are created, using a spatial interaction model fitted using the EM algorithm on the harmonized flows from the previous chapter. ... %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\chapter{Estimating Measures of Precision for Missing Data in International Migration Flow Tables} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\section{Introduction} In this chapter, estimates for the measures of precision of missing cells in international migration flow tables are derived. ... \\section{Properties of the EM Algorithm} The derivation of the SEM algorithm is dependent on both analytical expressions for the rate of convergence of the EM algorithm and manipulations of the asymptotic variance-covariance matrix of parameter estimates. ... \\subsection{Rate of Convergence in the EM Algorithm} For the EM algorithm described in Section \\ref{sec:em}, the mapping $\\boldsymbol\\theta \\rightarrow M( \\boldsymbol\\theta )$ from the parameter space of $\\boldsymbol\\theta$, to itself is implied. ... \\section{Supplemented EM algorithm} The estimation of $\\Delta\\mathbf{V}$ in (\\ref{eqn:semv2}) can be obtained using the SEM algorithm introduced by \\cite{meng1991uoa}. ... \\section{Akaike Information Criterion for Incomplete Data} Finding a suitable dimension for parameters $\\boldsymbol\\theta$ can be undertaken by comparing several models based on their values of an information criteria, such as the Akaike Information Criterion (AIC) of (\\ref{eqn:emaic}). ... 
\\section{Estimates of Precision for Missing Data in International Migration Flow Tables} The SEM algorithm can be utilized in the estimation of international migration flow tables. ... \\subsection{Modelling of Complete Data} As no implementable stepwise model selection routine existed for incomplete data, a fit all models function was written to run the SEM algorithm on the complete range of main effects models from the covariate set proposed in Section \\ref{sec:covdis}. ... \\section{Summary and Discussion} The SEM algorithm provides a useful technique when applied to international migration flow tables, where data is often incomplete. ... %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\chapter{Conclusion} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\section{Summary} This study applied computationally intensive mathematical and statistical techniques to develop a methodology to estimate international migration flow tables of comparable data. ... \\subsection{Estimation Over Time} The relative stability in migration definitions and data collection systems provides a basis for harmonizing international migration flow data. ... \\subsection{Accounting for Data Dissemination Problems} As a prelude to the estimation of correction factors, counts of known migrants with unknown origins or destinations were accounted for by distributing these flows according to the existing distributional patterns. ... \\subsection{Ignoring Poor Quality Data} Careful consideration was taken in deciding the eligibility of countries for the estimation of correction factors to scale reported data. ... \\subsection{Model Selection} The EM algorithm was used to impute missing migration flow values. ... \\subsection{Measures of Variation} The SEM algorithm was used in Chapter 6 to obtain an estimate for the asymptotic variance-covariance matrix for parameter estimates, using only the code for an EM algorithm, computations for asymptotic complete data variance-covariance matrix and standard matrix procedures. ... \\section{Context of Study} \\subsection{Modeling International Migration} There exists a wide range of literature on modeling migration (see for example, \\cite{massey1993tim} or \\cite{greenwood2003ehm}). ... \\subsection{International Migration Data} International migration flow data is often incomparable across multiple nations. ... %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\begin{small} \\addcontentsline{toc}{chapter}{Bibliography} \\bibliography{ThesisBib} \\end{small} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \\newpage \\appendix \\noappendicestocpagenum \\addappheadtotoc \\chapter{S-Plus/R Code} \\section{Poulain Constrained Minimization} \\linespread{1} \\begin{small} \\begin{verbatim} poulain <- function(M, nr, base) { if(dim(M) != 2) stop(\"M must be a array of dimensions n x n x 2\") #tidy up data to exclude non-referee (nr) regions M[is.na(M)] <- 0 ... \\end{verbatim} \\end{small} \\newpage \\section{Distance Functions for Constrained Optimization} \\begin{small} \\begin{verbatim} ChiSq <- function(x, M1, M2) { n <- length(x) a <- matrix(x[1:c(n/2)], dim(M1), dim(M1), byrow = T) b <- matrix(x[c(1 + n/2):n], dim(M2), dim(M2)) sum(abs(a * M1 - b * M2)^2/(M1 + M2), na.rm = T) } ... 
\\end{verbatim} \\end{small} \\newpage \\section{EM Algorithm for Negative Binomial Regression Model} \\begin{small} \\begin{verbatim} glm.nb.EM <- function(model, data, tol, max.it, z0) { if(all(is.missing(pmatch(names(data),\"y\")))==T) stop(\"data must have a response column named y with some missing data\") data$original <- data$y #Initial E-step with some unknown parameter set data$y[is.na(data$original)] <- z0 ... \\end{verbatim} \\end{small} \\newpage \\section{Supplemented EM Algorithm} \\begin{small} \\begin{verbatim} em <- function(beta0, model, data) { #E step fit <- exp(model.matrix(model, data) %*% beta0) data$y[is.na(data$original)] <- c(fit)[is.na(data\\$original)] #M step ... \\end{verbatim} \\end{small} \\end{document}" ]
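The skeleton above repeatedly invokes the EM algorithm for fitting regression models to incomplete flow tables without showing the mechanics. As a rough illustration only (my own sketch, not text from the thesis, and assuming a plain Poisson log-linear model rather than the negative binomial extension the thesis actually develops), the two steps alternate as follows:

$$\text{E-step:}\qquad y_{ij}^{(t)} = \begin{cases} y_{ij}, & \text{flow } i \to j \text{ observed},\\[2pt] \exp\!\big(\mathbf{x}_{ij}^{\top}\boldsymbol\beta^{(t)}\big), & \text{flow } i \to j \text{ missing}, \end{cases}$$

$$\text{M-step:}\qquad \boldsymbol\beta^{(t+1)} = \arg\max_{\boldsymbol\beta} \sum_{i \ne j} \Big( y_{ij}^{(t)}\,\mathbf{x}_{ij}^{\top}\boldsymbol\beta - \exp\!\big(\mathbf{x}_{ij}^{\top}\boldsymbol\beta\big) \Big),$$

that is, missing cells are replaced by their current fitted values and the complete-data model is refitted, with the two steps repeated until the parameter estimates stop changing. The covariates $\mathbf{x}_{ij}$ stand for whatever origin, destination and interaction terms the model selection retains.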
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8021948,"math_prob":0.9829548,"size":21801,"snap":"2023-14-2023-23","text_gpt3_token_len":4940,"char_repetition_ratio":0.19190714,"word_repetition_ratio":0.08265916,"special_character_ratio":0.2671437,"punctuation_ratio":0.11954231,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97434396,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T17:42:25Z\",\"WARC-Record-ID\":\"<urn:uuid:9577f6b5-f78b-487b-911f-c3680cec5b67>\",\"Content-Length\":\"228939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35dfea43-4964-4f1a-b274-1d1ee7a09877>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ed192cb-d02c-4225-a9c6-3f9817b813cd>\",\"WARC-IP-Address\":\"140.82.112.3\",\"WARC-Target-URI\":\"https://gist.github.com/guyabel/9574156\",\"WARC-Payload-Digest\":\"sha1:LEHUHRVEZVO34Q5E32MZ3YXTGDG3AK4U\",\"WARC-Block-Digest\":\"sha1:ODOMQT3LARGHJCCLTJYXRWRHNJU2PLUY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657735.85_warc_CC-MAIN-20230610164417-20230610194417-00543.warc.gz\"}"}
https://scicomp.stackexchange.com/questions/26279/numerical-computation-of-the-velocity-in-the-steady-navier-stokes-equation/26283
[ "# Numerical computation of the velocity in the steady Navier-Stokes equation\n\nI've asked this question on Math.SE too.\n\nLet\n\n• $d\\in\\left\\{1,\\ldots,4\\right\\}$\n• $\\Lambda\\subseteq\\mathbb R^d$ be bounded, nonempty and open and $\\partial\\Lambda$ be Lipschitz\n• $V:=\\left\\{u\\in H_0^1(\\Lambda,\\mathbb R^d):\\nabla\\cdot u=0\\right\\}$\n• $W:=\\left\\{p\\in L^2(\\Lambda):\\int_\\Lambda p=0\\right\\}$\n• $\\mathfrak a(u,v):=\\sum_{i=1}^d\\langle\\nabla u_i,\\nabla v_i\\rangle_{L^2}$ for $u,v\\in H^1(\\Lambda,\\mathbb R^d)$\n• $\\mathfrak b(p,v):=\\langle p,\\nabla\\cdot v\\rangle_{L^2}$ for $p\\in L^2(\\Lambda)$ and $v\\in H^1(\\Lambda,\\mathbb R^d)$\n• $\\mathfrak c(u,v,w):=\\langle((u\\cdot\\nabla)v,w\\rangle_{L^2}$ for $u,v,w\\in H^1(\\Lambda,\\mathbb R^d)$\n\nThe usually studied variational formulation of the steady Navier-Stokes equation is\n\n\\begin{equation}\\left\\{ \\begin{split} \\mathfrak a(u,v)+\\mathfrak b(p,v)+\\mathfrak c(u,u,v)&=0\\;\\;\\;\\text{for all }v\\in H_0^1(\\Lambda,\\mathbb R^d)\\\\ \\mathfrak b(u,q)&=0\\;\\;\\;\\text{for all }q\\in W \\end{split}\\tag1\\right. \\end{equation}\n\nwhere $(u,p)\\in H_0^1(\\Lambda,\\mathbb R^d)\\times W$ is the searched solution.\n\nI want to solve $(1)$ numerically. Usually, $(1)$ is linearized in some way and then a mixed finite element method is used to approximate a solution.\n\nHowever, I'm not interested in $p$. Now, $$\\mathfrak a(u,v)+\\mathfrak c(u,u,v)=0\\;\\;\\;\\text{for all }v\\in V\\tag2$$ (where $u\\in V$ is the searched solution) is a variational formulation of the steady Navier-Stokes equation which is equivalent to $(1)$ and which doesn't contain $p$.\n\nI wonder if it might be better for me to find a numeric scheme which solves $(2)$. A linearization, e.g. an Oseen iteration, is possible for $(2)$ too. I think the crucial point is the choice of a (conforming) finite element.\n\nSo, the question if I can benefit from the fact that I'm not interested in $p$ and hence don't need to care about its approximation. I've read many papers, but I couldn't find any which tries to solve the steady Navier-Stokes equation for the velocity only.\n\n• It very much depends on what you are modelling -- (2) is not equivalent to (1) unless the pressure gradient is zero (otherwise you'd get a non-zero internal thermodynamic source driving the fluid flow). So even if you are not interested in $p$, the fluid might be, and if you don't solve for it accurately enough, your approximation of $u$ will suffer (and, in fact, be limited by your approximation of $p$, see scicomp.stackexchange.com/a/14157). – Christian Clason Feb 27 '17 at 20:08\n• @ChristianClason If $(u,p)$ solves the first equation in $(1)$, then $u$ solves $(2)$, since $\\mathfrak b(p,v)=0$ for all $v\\in V$. On the other hand, if $u\\in V$ is a solution of $(2)$, then there is a unique $p\\in W$ with $(1)$. So, the problems are equivalent. – 0xbadf00d Feb 27 '17 at 20:20\n• This might just be because this isn't quite the notation I'm used to, but where are your boundary conditions? – origimbo Feb 27 '17 at 20:42\n• Wouldnt that require test functions that are a-priori divergence free? This way you would already be projecting the velocity on a divergence free subspace? How could you construct such test functions? – BlaB Feb 27 '17 at 20:58\n• @origimbo The Dirichlet boundary condition is implicit by the choice of $H_0^1(\\Lambda,\\mathbb R^d)$ as the solution and test function space of $(1)$. 
– 0xbadf00d Feb 27 '17 at 21:12\n\nIt is difficult, in practice, to construct finite dimensional subspaces of $V$ that are conforming and that still satisfy appropriate approximation properties. For example, finite element spaces lack appropriate continuity properties across faces between cells. But, it is not difficult to use Fourier approximations that are divergence free and that allow you what you want to do. Of course, Fourier approximations require you to work in a box geometry -- and that is their main practical drawback, because the Navier-Stokes equations are just not very interesting from an applied perspective in a box geometry.\n• Just to make sure I've understood what you mean: If we deal with $(1)$, we search a solution $u\\in H_0^1(\\Lambda,\\mathbb R^d)$ of the first equation of $(1)$ (for some $p\\in W$) and enforce $\\nabla\\cdot u=0$ by the second equation of $(1)$. This yields $u\\in V$. I've asked the question: If we're not interested in $p$, why don't we search the solution of the first equation of $(1)$ in $V$ in the first place? And in order to state this problem completely in $V$, we should test against functions from $V$. This immediately leads $(2)$, cause $(\\mathfrak b(p,v)=0$ for all $v\\in V$). – 0xbadf00d Feb 28 '17 at 12:50\n• Doing so, we wouldn't need to deal with mixed finite elements. We would only need to deal with the space $V$. Now, you say the problem is that it's \"difficult\" to construct finite-dimensional subspaces of $V$. Why? And it's not clear to me, why they have poorer approximation properties that the subspace of $H_0^1$. – 0xbadf00d Feb 28 '17 at 12:51\n• What I wanted to say in my answer is that it is possible to do as you suggest, i.e., search for solutions in $V$ (with test functions in $V$). The problem is constructing finite dimensional subspaces. For example, you could use $Q_1$ elements for the $x$ and $y$ velocities, and then choose the $z$ velocity so that the divergence is zero in the interior of the element. But the $z$ velocity may not be continuous across cell interfaces, and so the velocity is not in a subspace of $H^1$, and consequently not in a subspace of $V$. – Wolfgang Bangerth Feb 28 '17 at 13:11\n• In other words, the challenge is to construct finite element spaces whose functions are both divergence free, and sufficiently regular to be a subspace of $H^1$. I don't think I've ever seen that worked out on arbitrary meshes. (Which doesn't mean that it can't be done, but I suspect that you need fairly high polynomial degrees.) – Wolfgang Bangerth Feb 28 '17 at 13:12" ]
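The question mentions that an Oseen iteration is possible for $(2)$ but never writes it down. Purely as a sketch of the standard textbook scheme (not something proposed by the posters here), the linearization freezes the convecting velocity at the previous iterate: given $u^k \in V$, find $u^{k+1} \in V$ such that

$$\mathfrak a(u^{k+1},v)+\mathfrak c(u^k,u^{k+1},v)=\ell(v)\qquad\text{for all }v\in V,$$

where $\ell$ collects any forcing and inhomogeneous boundary data (it is identically zero in the homogeneous form of $(2)$ as stated), and one stops when $\|u^{k+1}-u^k\|_{H^1}$ falls below a tolerance. Each step is then a linear problem posed entirely on $V$, which is exactly where the difficulty raised in the answer, namely building conforming divergence-free finite-dimensional subspaces of $V$, comes in.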
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7460843,"math_prob":0.9982353,"size":1951,"snap":"2020-10-2020-16","text_gpt3_token_len":679,"char_repetition_ratio":0.13662045,"word_repetition_ratio":0.024590164,"special_character_ratio":0.3218862,"punctuation_ratio":0.14864865,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999113,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-23T18:04:36Z\",\"WARC-Record-ID\":\"<urn:uuid:9a30e80d-7d98-49e6-be5e-348634cc39ea>\",\"Content-Length\":\"155683\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37186f81-65ee-47a6-9a7c-1d26bd265b63>\",\"WARC-Concurrent-To\":\"<urn:uuid:e8158837-f391-4226-8878-c7bc81e6078b>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://scicomp.stackexchange.com/questions/26279/numerical-computation-of-the-velocity-in-the-steady-navier-stokes-equation/26283\",\"WARC-Payload-Digest\":\"sha1:4BNYLCTMABXGBT6SH2E6KQVIJ62I64ST\",\"WARC-Block-Digest\":\"sha1:RIOJI6DLDDCGPKSG62ANYO5S4U6HK6L2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145818.81_warc_CC-MAIN-20200223154628-20200223184628-00518.warc.gz\"}"}
https://socratic.org/questions/how-do-you-factor-32-18x-2
[ "# How do you factor 32 - 18x^2?\n\nOct 1, 2015\n\nFirst separate out the common scalar factor $2$, then use the difference of squares identity to find:\n\n$32 - 18 {x}^{2} = 2 \\left(4 - 3 x\\right) \\left(4 + 3 x\\right)$\n\n#### Explanation:\n\nThe difference of squares identity is: ${a}^{2} - {b}^{2} = \\left(a - b\\right) \\left(a + b\\right)$\n\nUsing $a = 4$ and $b = 3 x$, we find:\n\n$32 - 18 {x}^{2} = 2 \\left(16 - 9 {x}^{2}\\right) = 2 \\left({4}^{2} - {\\left(3 x\\right)}^{2}\\right) = 2 \\left(4 - 3 x\\right) \\left(4 + 3 x\\right)$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.614123,"math_prob":1.0000093,"size":342,"snap":"2021-43-2021-49","text_gpt3_token_len":118,"char_repetition_ratio":0.10650887,"word_repetition_ratio":0.0,"special_character_ratio":0.3625731,"punctuation_ratio":0.05882353,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99974185,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T21:53:06Z\",\"WARC-Record-ID\":\"<urn:uuid:d13ed8a7-453a-402a-aeab-0669f73645bd>\",\"Content-Length\":\"32613\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a6b675b-ab51-4eec-91cb-3b879a88f19e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a807763-e5da-4f68-aa23-ef6b9690b8ec>\",\"WARC-IP-Address\":\"216.239.38.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-factor-32-18x-2\",\"WARC-Payload-Digest\":\"sha1:DN6LYFRN6JY2WSGRZL6NSTAOQV4RYV73\",\"WARC-Block-Digest\":\"sha1:FJ3JKH7IZRRZQA6ATSM4RWJW7MZ6GCGG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363418.83_warc_CC-MAIN-20211207201422-20211207231422-00102.warc.gz\"}"}
https://www.proprofs.com/quiz-school/story.php?title=Excel-Quiz-1-1
[ "# Excel Quiz 1\n\n7 Questions", null, "", null, "Settings", null, "", null, "This test can be used as a pre-test assessment of student knowledge or as a post-lesson test for student comprehension. If you need a refresher before beginning this quiz, go to: http://docs. Google. Com/EmbedSlideshow? Docid=ds6m4nn_3f8cnr74c\n\nRelated Topics\n• 1.\nIn order to save a new document in Microsoft Excel you must select which one of the following tool bar options?\n• A.\n\nEdit\n\n• B.\n\nFormat\n\n• C.\n\nHelp\n\n• D.\n\nFile\n\n• 2.\nAn Excel spreadsheet is primarily used for calculating which of the following options?\n• A.\n\nData\n\n• B.\n\nFinances\n\n• C.\n\nNumbers\n\n• D.\n\nAll of the above\n\n• 3.\nWhich of the following option is a formula?\n• A.\n\n=SUM(A1:A5)\n\n• B.\n\n• C.\n\nSubtract the numbers from A1 to A5\n\n• D.\n\nA1 = A5\n\n• 4.\nWhich one of the following options CANNOT be used in an Excel spreadsheet formula?\n• A.\n\n= (equal sign)\n\n• B.\n\n, (comma)\n\n• C.\n\n& (ampersand)\n\n• D.\n\n: (colon)\n\n• 5.\nWhat is the function of the word '=SUM' at the beginning of an Excel spreadsheet formula?\n• A.\n\n• B.\n\nTo tell the person viewing that this is a function and it should be added together\n\n• C.\n\nTo calculate all the data correctly without any mistakes\n\n• D.\n\nTo inform the computer that an arithmetic function will occur\n\n• 6.\nWhich of the following INCORRECTLY selects multiple cells?\n• A.\n\n(A1:G50)\n\n• B.\n\n(A1, B3:C9)\n\n• C.\n\n(A1:B5:C5)\n\n• D.\n\n(A1:B5#C5)\n\n• 7.\nIn order to add or alter a formula, under which of the following menu options can the 'function' menu be found?\n• A.\n\nTools\n\n• B.\n\nFile\n\n• C.\n\nHelp\n\n• D.\n\nInsert" ]
[ null, "https://www.proprofs.com/quiz-school/images/story_settings_gear.png", null, "https://www.proprofs.com/quiz-school/images/story_settings_gear_color.png", null, "https://www.proprofs.com/quiz-school/images/loader.gif", null, "https://www.proprofs.com/quiz-school/topic_images/p19t5hoe501p13e11hdh17nn3k53.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76401234,"math_prob":0.57883215,"size":737,"snap":"2019-51-2020-05","text_gpt3_token_len":221,"char_repetition_ratio":0.08185539,"word_repetition_ratio":0.0,"special_character_ratio":0.26458615,"punctuation_ratio":0.10958904,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976314,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T04:50:47Z\",\"WARC-Record-ID\":\"<urn:uuid:8b749ad3-2495-4c45-a7c7-ee203fe4b4fc>\",\"Content-Length\":\"98967\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1fe24cf6-4c1e-4f17-b408-7c7151b44359>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4175d61-288a-4d99-8b4f-46bb601fa4d0>\",\"WARC-IP-Address\":\"104.26.13.111\",\"WARC-Target-URI\":\"https://www.proprofs.com/quiz-school/story.php?title=Excel-Quiz-1-1\",\"WARC-Payload-Digest\":\"sha1:Q474N3D2VNFRPRFCQDVOKDHJA6Q7BJO4\",\"WARC-Block-Digest\":\"sha1:JUJLNLXIJTPFGK7ZFA7G7LL6L74K4KW2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540517557.43_warc_CC-MAIN-20191209041847-20191209065847-00099.warc.gz\"}"}
https://socratic.org/questions/54f8c482581e2a3a7d3b2b2b
[ "# Question b2b2b\n\nMar 5, 2015\n\n$\\text{XY}$ will have a lattice energy of $\\text{-2980 kJ/mol}$.\n\nThis is a very simple problem if you're familiar with the Born-Lande equation for calculating lattice energies. Here's how that looks like", null, "I won't list what all the terms in this equation mean because all but three of them are not important. The three terms that you should focus on are\n\n${z}^{+}$ $\\to$ the charge on the positive ion;\n${z}^{-}$ $\\to$ the charge on the negative ion;\n${r}_{0}$ $\\to$ the distance between the two opposite ions;\n\nNow, I believe that you are supposed to ignore the terms that are specific to individual ionic compounds, and assume that all the constants for your hypothetical salt are identical to those for CsF\".\n\nIf that's the case, you can rewrite the above equation like this\n\n$\\text{E} = \\frac{- {N}_{A} \\cdot M \\cdot {e}^{2}}{4 \\cdot \\pi \\cdot {\\epsilon}_{0}} \\cdot \\left(1 - \\frac{1}{n}\\right) \\cdot \\frac{| z {|}^{+} \\cdot | z {|}^{-}}{r} _ 0$\n\n$E = \\text{CONSTANT} \\cdot \\frac{| z {|}^{+} \\cdot | z {|}^{-}}{r} _ 0$\n\nNow, because X\"^(2+) has the same radius as Cs\"^(+) and ${\\text{Y}}^{2 -}$ has the sameradius as ${\\text{F}}^{-}$, the term ${r}_{0}$ will be the same for both salts. Therefore,\n\n$E = \\text{CONSTANT} \\cdot \\left(| z {|}^{+} \\cdot | z {|}^{-}\\right)$\n\nIn the case of $C s F$, ${z}^{+}$ is +1 and ${z}^{-}$ is -1; you can use this to write the lattice energy of $\\text{XY}$ using the lattice energy of $C s F$\n\n${E}_{\\text{XY\") = E_(\"CsF}} \\cdot \\left(| z {|}^{+} \\cdot | z {|}^{-}\\right)$\n\nSince $\\text{XY}$ has ${z}^{+}$ equal to +2 and ${z}^{-}$ equal to -2, you'll get\n\n${E}_{\\text{XY\") = E_(\"CsF\") * (|+2| * |-2|) = 4 * E_(\"CsF}}$\n\nE_(\"XY\") = 4 * (\"-744 kJ/mol\") = \"-2976 kJ/mol\"\n\nRounded to three sig figs, the answer will be\n\nE_(\"XY\") = \"-2980 kJ/mol\"#\n\nMar 5, 2015\n\nThe lattice energy of $X Y$ is $- 2976 \\text{kJ/mol}$\n\nIf we consider 2 charges ${q}_{1}$ and ${q}_{2}$ separated by a distance $r$ they are attracted by a force which is proportional to the product of the charges and inversely proportional to the square of the distance between them:\n\n$F \\propto \\frac{{q}_{1} {q}_{2}}{{r}^{2}}$\n\nSo $F = k \\frac{{q}_{1} {q}_{2}}{{r}^{2}}$\n\nWe can find the work done in separating these charges from a distance $r$ to infinity by integrating between these limits:\n\n$W = k {\\int}_{r}^{\\infty} \\frac{{q}_{1} {q}_{2}}{{r}^{2}} . \\mathrm{dr}$\n\nFrom which we get:\n\n$W = - k \\frac{{q}_{1} {q}_{2}}{r}$\n\nThis means that if we double the charges the work done required to separate them would go up by 2 x 2 = 4.\n\nThis means that in your example the lattice energy would increase to 4 x -744 = $2976 \\text{kJ/mol}$, provided $r$ does not change.\n\nIn this answer I have only considered 2 charges separated by a distance $r$. In reality ionic crystals are made up of giant lattices with attractive and repulsive forces moving out in 3 dimensions and decreasing with distance.", null, "A more accurate expression for lattice enthalpy $U$ which accounts for this is given by:\n\n$U = - \\frac{N M {z}^{2} {e}^{2}}{4 \\pi {\\epsilon}_{0} {r}_{0}} \\left(1 - \\frac{1}{n}\\right)$\n\nI won't go further into the details of this. If you want you can look up \"Madelung Constant\"." ]
[ null, "https://useruploads.socratic.org/31M2rsuvQrWtOg5MAt0y_e4e1219b20a6dffdd490de8fe82c9f08.png", null, "http://www.chemguide.co.uk/atoms/structures/naclexpl.GIF", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9477219,"math_prob":0.99993074,"size":1240,"snap":"2022-05-2022-21","text_gpt3_token_len":283,"char_repetition_ratio":0.10760518,"word_repetition_ratio":0.0,"special_character_ratio":0.21129033,"punctuation_ratio":0.04621849,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999882,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T17:34:46Z\",\"WARC-Record-ID\":\"<urn:uuid:cc8d5de1-97cf-47e1-9e95-fc45798e8e91>\",\"Content-Length\":\"40078\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce7dcadf-0aec-473b-b46d-9d2dc84573eb>\",\"WARC-Concurrent-To\":\"<urn:uuid:c8b999fe-eca8-4f5e-b6f4-4fa64da86564>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/54f8c482581e2a3a7d3b2b2b\",\"WARC-Payload-Digest\":\"sha1:K3FPB3CFBEAOWZWDFJFCEX7ZRLWI7UR6\",\"WARC-Block-Digest\":\"sha1:L27IJ3K6ZKQLKF2NRWYOKVHJTGVD77RG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662545875.39_warc_CC-MAIN-20220522160113-20220522190113-00703.warc.gz\"}"}
https://www.webcodeexpert.com/2013/10/c-language-program-to-calculate_22.html
[ "### C++ Language program to calculate factorial of a number using for loop\n\nIn this article i will explain How to write/create a program to find/get/calculate factorial of entered number using For loop in C++ language.\n\nDescription:  To calculate factorial of number, simply multiply  from 1 to that number or that number to 1 i.e. the factorial of a number 'n' is the product of all number from 1 up to the number 'n' or from 'n' up to 1 and it is denoted by n!.\n\nRule: n! = n*(n-1)!\n\nSo 1! = 1\n2! = 2×1 = 2\n3! = 3×2×1 = 6\n4! = 4×3×2×1 = 24\n5! = 5×4×3×2×1 = 120\n\nFor example if n=6 then factorial of 6 will be calculated as 1*2*3*4*5*6= 720 or 6*5*4*3*2*1=720.  So 6 != 720.\n\nImplementation: Let's create a C++ program to calculate Factorial of a number.\n\n#include<conio.h>\n#include<iostream.h>\nint main()\n{\nclrscr();\nunsigned long long int  fact=1;\nint i,n;\ncout<<\"Enter any positive number to calculate its factorial: \";\ncin>>n;\nfor(i=n;i>=1;i--)\n{\nfact=fact*i;\n}\ncout<<\"\\nFactorial of \" <<n<<\" = \"<<fact;\ngetch();\nreturn 0;\n}\n\nNote:  We can also use\nfor(i=1;i<=n;i++)\n{\nfact=fact*i;\n}\nfor(i=n;i>=1;i--)\n{\nfact=fact*i;\n}\nin the above program to calculate the factorial of a number.\n\n• Run the program using Ctrl+F9\nOutput:\n\nEnter any positive number to calculate its factorial: 5\nFactorial of 5 = 120\n\nNow over to you:\n\"If you like my work; you can appreciate by leaving your comments, hitting Facebook like button, following on Google+, Twitter, Linked in and Pinterest, stumbling my posts on stumble upon and subscribing for receiving free updates directly to your inbox . Stay tuned and stay connected for more technical updates.\"\nPrevious\nNext Post »\n\nIf you have any question about any post, Feel free to ask.You can simply drop a comment below post or contact via Contact Us form. Your feedback and suggestions will be highly appreciated. Also try to leave comments from your account not from the anonymous account so that i can respond to you easily.." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8234763,"math_prob":0.9772918,"size":2357,"snap":"2019-51-2020-05","text_gpt3_token_len":619,"char_repetition_ratio":0.13302167,"word_repetition_ratio":0.05955335,"special_character_ratio":0.28086552,"punctuation_ratio":0.12598425,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9972929,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T09:42:42Z\",\"WARC-Record-ID\":\"<urn:uuid:028434a6-c859-4f13-a21f-cc3a9c9c2bba>\",\"Content-Length\":\"194333\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd289153-b2b5-4b4d-86a2-ea8ab10274fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:b011bf2a-1016-4838-8db3-130b2b8758b9>\",\"WARC-IP-Address\":\"172.217.12.243\",\"WARC-Target-URI\":\"https://www.webcodeexpert.com/2013/10/c-language-program-to-calculate_22.html\",\"WARC-Payload-Digest\":\"sha1:BALYGGFSYKLOWN2T3EWYAK34AFQ3MIBJ\",\"WARC-Block-Digest\":\"sha1:2RAQXZ7W7LHN3E73A63A356UXU4GZMFC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594391.21_warc_CC-MAIN-20200119093733-20200119121733-00217.warc.gz\"}"}
https://tools.carboncollective.co/compound-interest/63566-at-40-percent-in-11-years/
[ "# What is the compound interest on $63566 at 40% over 11 years? If you want to invest$63,566 over 11 years, and you expect it will earn 40.00% in annual interest, your investment will have grown to become $2,574,146.60. If you're on this page, you probably already know what compound interest is and how a sum of money can grow at a faster rate each year, as the interest is added to the original principal amount and recalculated for each period. The actual rate that$63,566 compounds at is dependent on the frequency of the compounding periods. In this article, to keep things simple, we are using an annual compounding period of 11 years, but it could be monthly, weekly, daily, or even continuously compounding.\n\nThe formula for calculating compound interest is:\n\n$$A = P(1 + \\dfrac{r}{n})^{nt}$$\n\n• A is the amount of money after the compounding periods\n• P is the principal amount\n• r is the annual interest rate\n• n is the number of compounding periods per year\n• t is the number of years\n\nWe can now input the variables for the formula to confirm that it does work as expected and calculates the correct amount of compound interest.\n\nFor this formula, we need to convert the rate, 40.00% into a decimal, which would be 0.4.\n\n$$A = 63566(1 + \\dfrac{ 0.4 }{1})^{ 11}$$\n\nAs you can see, we are ignoring the n when calculating this to the power of 11 because our example is for annual compounding, or one period per year, so 11 × 1 = 11.\n\n## How the compound interest on $63,566 grows over time The interest from previous periods is added to the principal amount, and this grows the sum a rate that always accelerating. The table below shows how the amount increases over the 11 years it is compounding: Start Balance Interest End Balance 1$63,566.00 $25,426.40$88,992.40\n2 $88,992.40$35,596.96 $124,589.36 3$124,589.36 $49,835.74$174,425.10\n4 $174,425.10$69,770.04 $244,195.15 5$244,195.15 $97,678.06$341,873.20\n6 $341,873.20$136,749.28 $478,622.49 7$478,622.49 $191,448.99$670,071.48\n8 $670,071.48$268,028.59 $938,100.07 9$938,100.07 $375,240.03$1,313,340.10\n10 $1,313,340.10$525,336.04 $1,838,676.14 11$1,838,676.14 $735,470.46$2,574,146.60\n\nWe can also display this data on a chart to show you how the compounding increases with each compounding period.\n\nIn this example we have 11 years of compounding, but to truly see the power of compound interest, it might be better to look at a larger number of compounding periods to see how much $63,566 can grow. If you want an example with more compounding years, click here to view the compounding interest of$63,566 at 40.00% over 30 years.\n\nAs you can see if you view the compounding chart for $63,566 at 40.00% over a long enough period of time, the rate at which it grows increases over time as the interest is added to the balance and new interest calculated from that figure. ## How long would it take to double$63,566 at 40% interest?\n\nAnother commonly asked question about compounding interest would be to calculate how long it would take to double your investment of $63,566 assuming an interest rate of 40.00%. We can calculate this very approximately using the Rule of 72. The formula for this is very simple: $$Years = \\dfrac{72}{Interest\\: Rate}$$ By dividing 72 by the interest rate given, we can calculate the rough number of years it would take to double the money. 
Let's add our rate to the formula and calculate this: $$Years = \\dfrac{72}{ 40 } = 1.8$$ Using this, we know that any amount we invest at 40.00% would double itself in approximately 1.8 years. So$63,566 would be worth $127,132 in ~1.8 years. We can also calculate the exact length of time it will take to double an amount at 40.00% using a slightly more complex formula: $$Years = \\dfrac{log(2)}{log(1 + 0.4)} = 2.06\\; years$$ Here, we use the decimal format of the interest rate, and use the logarithm math function to calculate the exact value. As you can see, the exact calculation is very close to the Rule of 72 calculation, which is much easier to remember. Hopefully, this article has helped you to understand the compound interest you might achieve from investing$63,566 at 40.00% over a 11 year investment period." ]
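As a cross-check on the headline figure at the top of the article, substituting the stated numbers directly into the compound-interest formula gives

$$A = 63566\left(1 + \dfrac{0.4}{1}\right)^{11} = 63566 \times 40.4956516\ldots \approx \$2{,}574{,}146.60,$$

which matches the final balance in the year-by-year table above.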
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9304919,"math_prob":0.99807304,"size":4146,"snap":"2023-14-2023-23","text_gpt3_token_len":1190,"char_repetition_ratio":0.15813616,"word_repetition_ratio":0.014367816,"special_character_ratio":0.3540762,"punctuation_ratio":0.16438356,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99984324,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T10:32:45Z\",\"WARC-Record-ID\":\"<urn:uuid:71b1d3e4-3060-40a3-8dd8-41282f694cdc>\",\"Content-Length\":\"26592\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b09226a4-f77f-429d-8e87-ccd6746eb894>\",\"WARC-Concurrent-To\":\"<urn:uuid:10ecda9d-51e6-41c1-81d3-d679146dccfd>\",\"WARC-IP-Address\":\"138.197.3.89\",\"WARC-Target-URI\":\"https://tools.carboncollective.co/compound-interest/63566-at-40-percent-in-11-years/\",\"WARC-Payload-Digest\":\"sha1:IVBZTOXGCIYTDKU2XQZFMUGJJ26HSI65\",\"WARC-Block-Digest\":\"sha1:26WM2QNQ5QCXK242KTJ3YUEHSKC7JOYO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949958.54_warc_CC-MAIN-20230401094611-20230401124611-00744.warc.gz\"}"}
https://users.rust-lang.org/t/need-help-with-vec-iter-with-map/46814
[ "", null, "# Need help with vec iter with map\n\nI'm trying to populate a vector with tuples using inter and map and then accessing the vector using another iter.\n\nIn this case pay no attention to the fact tuples are being used in this way since I'm simplifying code that is using window creations instead of tuples.\n\nI'm not sure why, but the first iter seems to be failing to populate the vec with tuples.\n\nCode:\n\n``````fn create_tuple1() -> (f64, f64) {\n(5.0, 6.0)\n}\n\nfn create_tuple2() -> (f64, f64) {\n(8.0, 9.0)\n}\n\nfn main() {\nlet mut multi_tuples: Vec<_> = (0..2 as usize)\n.into_iter()\n.map(|n| {\nmatch n {\n0 => create_tuple1(),\n1 => create_tuple2(),\n_ => {}\n};\n})\n.collect();\n\nloop {\nlet mut x = false;\nfor (i, tuple) in multi_tuples.iter_mut().enumerate() {\nmatch i {\n0 => {\nprintln!(\"Tuple number = {}\", i);\nprintln!(\"Tuple = {:?}\", tuple);\n}\n1 => {\nprintln!(\"Tuple number = {}\", i);\n}\n_ => {}\n}\n}\nif !x {\nbreak;\n}\n}\n}\n\n``````\n\nError is:\n\n`````` Compiling playground v0.0.1 (/playground)\nerror[E0308]: `match` arms have incompatible types\n--> src/main.rs:16:22\n|\n13 | / match n {\n14 | | 0 => create_tuple1(),\n| | --------------- this is found to be of type `(f64, f64)`\n15 | | 1 => create_tuple2(),\n| | --------------- this is found to be of type `(f64, f64)`\n16 | | _ => {}\n| | ^^ expected tuple, found `()`\n17 | | };\n| |_____________- `match` arms have incompatible types\n|\n= note: expected tuple `(f64, f64)`\nfound unit type `()`\n\n``````\n\nOk, I replaced the first iter code with (and it worked):\n\n`````` let mut multi_tuples: Vec<(f64, f64)> = vec![];\nmulti_tuples.push(create_tuple1());\nmulti_tuples.push(create_tuple2());\n``````\n\nbut I'd still like to know why this iter didn't work and if there is a way to make it work the way I want:\n\n`````` let mut multi_tuples: Vec<_> = (0..2 as usize)\n.into_iter()\n.map(|n| {\nmatch n {\n0 => create_tuple1(),\n1 => create_tuple2(),\n_ => {}\n};\n})\n.collect();\n``````\n\n`map` requires the same value for all arms of match. Use `filter_map` instead and either return `Some(tuple)` or `None`\n\nIf you know that every value in the `Vec` is either `0` or `1`, you can use something that panics. Like:\n\n`````` let mut multi_tuples: Vec<_> = (0..2 as usize)\n.into_iter()\n.map(|n| {\nmatch n {\n0 => create_tuple1(),\n1 => create_tuple2(),\n_ => unreachable!()\n};\n})\n.collect();\n``````\n1 Like\n\nJust in case this wasn’t clear already:\n\nIf this is just for initialization, you can call the `create_tuple*()` functions in the `vec!` macro expression:\n\n``````let mut multi_tuples: Vec<(f64, f64)> = vec![create_tuple1(), create_tuple2()];\n``````\n\nThank You for the clarity!!\n\nCould you please give me a code example?\n\nSure. This should work\n\n``````let mut multi_tuples: Vec<_> = (0..2 as usize)\n.into_iter()\n.filter_map(|n| {\nmatch n {\n0 => Some(create_tuple1()),\n1 => Some(create_tuple2()),\n_ => None\n};\n})\n.collect();\n``````\n\nThank You! Super educational!\n\nThis topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments." ]
[ null, "https://aws1.discourse-cdn.com/business5/uploads/rust_lang/original/2X/e/e260a60b8dca4dae6ce7db98c45bb5008e6fdc62.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7198878,"math_prob":0.972802,"size":1377,"snap":"2021-04-2021-17","text_gpt3_token_len":401,"char_repetition_ratio":0.13911143,"word_repetition_ratio":0.09448819,"special_character_ratio":0.4415396,"punctuation_ratio":0.21455939,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9747896,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T19:02:01Z\",\"WARC-Record-ID\":\"<urn:uuid:b011a2cf-4a0b-4020-83e8-f3bde194d7f8>\",\"Content-Length\":\"35067\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bdfe8839-63ca-4e34-920a-833f9b8e561a>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c359955-4961-4efa-9bcd-b44d22dce749>\",\"WARC-IP-Address\":\"72.52.80.20\",\"WARC-Target-URI\":\"https://users.rust-lang.org/t/need-help-with-vec-iter-with-map/46814\",\"WARC-Payload-Digest\":\"sha1:MIUVZEIZDZLSRZN6BLEMN3VWT2GYWG6H\",\"WARC-Block-Digest\":\"sha1:BDGMIBAQAN5ALNLWSCVO7LULBQLA2A62\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703527224.75_warc_CC-MAIN-20210121163356-20210121193356-00316.warc.gz\"}"}
https://www.ebi.ac.uk/ols/ontologies/mamo/terms?short_form=MAMO_0000023
[ "This version of OLS (OLS3) is no longer updated and will be replaced by OLS4 on 30th October 2023.\n\nHelp us test the new version of OLS, with updated versions of ontologies and lots of new features!\n\n## computational model\n\nGo to external page http://identifiers.org/mamo/MAMO_0000023\n\nThis is just here as a test because I lose it\n\n#### Term information\n\ndefinition\n• mathematical model that requires computer simulations to study the behavior of a complex system. The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by adjusting the parameters of the system in the computer, and studying the differences in the outcome of the experiments.\nexample\n• weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, neural network models.\nseeAlso\n• http://en.wikipedia.org/wiki/Computational_model\n\nSubclass of:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83073634,"math_prob":0.6471473,"size":852,"snap":"2023-40-2023-50","text_gpt3_token_len":162,"char_repetition_ratio":0.12617925,"word_repetition_ratio":0.0,"special_character_ratio":0.17840375,"punctuation_ratio":0.12230216,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96343094,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T05:12:02Z\",\"WARC-Record-ID\":\"<urn:uuid:90b4f704-87db-45bd-bd7a-bb8dd41f2440>\",\"Content-Length\":\"36542\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea20bc61-f91b-40ab-b83d-bad6576b8606>\",\"WARC-Concurrent-To\":\"<urn:uuid:48c4b1ef-d9d2-4609-9a9e-893face06f86>\",\"WARC-IP-Address\":\"193.62.193.80\",\"WARC-Target-URI\":\"https://www.ebi.ac.uk/ols/ontologies/mamo/terms?short_form=MAMO_0000023\",\"WARC-Payload-Digest\":\"sha1:WGDLM6VKJRBLEY35CC2A32XRTD6FSODU\",\"WARC-Block-Digest\":\"sha1:GHI3J7AMNZTNAW72V7XPMWB7MTPKE6TY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510603.89_warc_CC-MAIN-20230930050118-20230930080118-00776.warc.gz\"}"}
http://olympiads.win.tue.nl/imo/imo97/imo97d1.html
[ "Version: English\n\nFirst day\nMar del Plata, Argentina - July 24, 1997\n\n1\n\nIn the plane the points with integer coordinates are the vertices of unit squares. The squares are coloured alternately black and white (as on a chessboard).\n\nFor any pair of positive integers m and n, consider a right-angled triangle whose vertices have integer coordinates and whose legs, of lengths m and n, lie along edges of the squares.\n\nLet S1 be the total area of the black part of the triangle and S2 be the total area of the white part. Let\n\nf(m,n) = | S1 - S2 |.\n\n(a) Calculate f(m,n) for all positive integers m and n which are either both even or both odd.\n\n(b) Prove that", null, "for all m and n.\n\n(c) Show that there is no constant C such that f(m,n) < C for all m and n.\n\n2\n\nAngle A is the smallest in the triangle ABC.\n\nThe points B and C divide the circumcircle of the triangle into two arcs. Let U be an interior point of the arc between B and C which does not contain A.\n\nThe perpendicular bisectors of AB and AC meet the line AU at V and W, respectively. The lines BV and CW meet at T.\n\nShow that\n\nAU = TB + TC.\n\n3\n\nLet x1, x2, ... , xn be real numbers satisfying the conditions:\n\n|x1 + x2 + ... + xn | = 1\n\nand", null, "for i = 1, 2, ... , n.\n\nShow that there exists a permutation   y1, y2, ... , yn  of   x1, x2, ... , xn  such that", null, ".\n\nEach problem is worth 7 points\nTime: 4 1/2 hours." ]
[ null, "http://olympiads.win.tue.nl/imo/imo97/eq_2.gif", null, "http://olympiads.win.tue.nl/imo/imo97/eq_5.gif", null, "http://olympiads.win.tue.nl/imo/imo97/eq_6.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8878338,"math_prob":0.9979504,"size":1287,"snap":"2020-10-2020-16","text_gpt3_token_len":352,"char_repetition_ratio":0.12392829,"word_repetition_ratio":0.029962547,"special_character_ratio":0.2937063,"punctuation_ratio":0.17532468,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968372,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-24T05:30:38Z\",\"WARC-Record-ID\":\"<urn:uuid:5dd86f14-5197-4f6b-9acb-b4a332021094>\",\"Content-Length\":\"3316\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a0a367a7-9d0e-41bc-92b7-8df57af63a44>\",\"WARC-Concurrent-To\":\"<urn:uuid:e7957a37-852b-467b-ab43-a614eb7099bc>\",\"WARC-IP-Address\":\"37.128.148.44\",\"WARC-Target-URI\":\"http://olympiads.win.tue.nl/imo/imo97/imo97d1.html\",\"WARC-Payload-Digest\":\"sha1:ML6TQOUT2HLUGEKOKPZH6MP5I4GRSZN2\",\"WARC-Block-Digest\":\"sha1:3F45HORC4NRUU37HUBOYFI7F6LC7MRJG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145897.19_warc_CC-MAIN-20200224040929-20200224070929-00457.warc.gz\"}"}
https://convertoctopus.com/250-fluid-ounces-to-liters
[ "## Conversion formula\n\nThe conversion factor from fluid ounces to liters is 0.0295735296875, which means that 1 fluid ounce is equal to 0.0295735296875 liters:\n\n1 fl oz = 0.0295735296875 L\n\nTo convert 250 fluid ounces into liters we have to multiply 250 by the conversion factor in order to get the volume amount from fluid ounces to liters. We can also form a simple proportion to calculate the result:\n\n1 fl oz → 0.0295735296875 L\n\n250 fl oz → V(L)\n\nSolve the above proportion to obtain the volume V in liters:\n\nV(L) = 250 fl oz × 0.0295735296875 L\n\nV(L) = 7.393382421875 L\n\nThe final result is:\n\n250 fl oz → 7.393382421875 L\n\nWe conclude that 250 fluid ounces is equivalent to 7.393382421875 liters:\n\n250 fluid ounces = 7.393382421875 liters", null, "## Alternative conversion\n\nWe can also convert by utilizing the inverse value of the conversion factor. In this case 1 liter is equal to 0.13525609023568 × 250 fluid ounces.\n\nAnother way is saying that 250 fluid ounces is equal to 1 ÷ 0.13525609023568 liters.\n\n## Approximate result\n\nFor practical purposes we can round our final result to an approximate numerical value. We can say that two hundred fifty fluid ounces is approximately seven point three nine three liters:\n\n250 fl oz ≅ 7.393 L\n\nAn alternative is also that one liter is approximately zero point one three five times two hundred fifty fluid ounces.\n\n## Conversion table\n\n### fluid ounces to liters chart\n\nFor quick reference purposes, below is the conversion table you can use to convert from fluid ounces to liters\n\nfluid ounces (fl oz) liters (L)\n251 fluid ounces 7.423 liters\n252 fluid ounces 7.453 liters\n253 fluid ounces 7.482 liters\n254 fluid ounces 7.512 liters\n255 fluid ounces 7.541 liters\n256 fluid ounces 7.571 liters\n257 fluid ounces 7.6 liters\n258 fluid ounces 7.63 liters\n259 fluid ounces 7.66 liters\n260 fluid ounces 7.689 liters" ]
[ null, "https://convertoctopus.com/images/250-fluid-ounces-to-liters", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76773214,"math_prob":0.9793518,"size":1853,"snap":"2020-24-2020-29","text_gpt3_token_len":485,"char_repetition_ratio":0.2385073,"word_repetition_ratio":0.012861736,"special_character_ratio":0.35024285,"punctuation_ratio":0.09863014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99289095,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-06T03:56:52Z\",\"WARC-Record-ID\":\"<urn:uuid:40e0178a-f550-4117-9924-1100cb920a6d>\",\"Content-Length\":\"32232\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df019c6a-569c-45eb-ae8f-f629d98acfc6>\",\"WARC-Concurrent-To\":\"<urn:uuid:19f7876b-4876-44ea-8370-4c0144616ce5>\",\"WARC-IP-Address\":\"104.27.142.66\",\"WARC-Target-URI\":\"https://convertoctopus.com/250-fluid-ounces-to-liters\",\"WARC-Payload-Digest\":\"sha1:HV7JB4YLKS3NWAPV72QRPKB7WZJXZ3R4\",\"WARC-Block-Digest\":\"sha1:HA6NO2U5YR4SLDGPXRYASJZBYISUYOVY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348509972.80_warc_CC-MAIN-20200606031557-20200606061557-00400.warc.gz\"}"}
https://www.biostars.org/p/9494978/
[ "Compare elements of a vector and choose elements from vector to be eliminated in a data set\n1\n0\nEntering edit mode\n6 weeks ago\nBioinfo ▴ 20\n\nMy aim is to eliminate dupliccation in dataframe\n\ni wrote a program that determine variables that have the same values in row 17 , next the program put these variables in other data and calculate correlation matrix , i set percentage of this correlation matrix to be 95% it means the program create vector that contain only variables names that correlated more than 95%\n\nfor example vector contain name of variables\n\n>Vector\n\n\"MT91\" \"MT92\" \"MT93\"\n\n\ni want to use this vector to calculate the sum of these variables in all the other lines\n\nfor example i have this data :\n\nName\n\n MT91 MT93 MT92 MT95\nQC_G1 70027.02132 95774.1359 100 24\nQC_G2 69578.18634 81479.29575 200 45\nQC_G3 69578.18634 87021.95427 10 42545\nQC_G4 68231.14338 95558.76738 1000 425\nQC_G5 64874.12936 96780.77245 7000 4545\nQC_G6 63866.65780 91854.35304 19 455\nCtr1 66954.38799 128861.36163 199 2424\nCtr2 97352.55229 101353.25927 155 344\nCtr3 1252.42545 115683.73755 188 3434\nBti1 81873.96379 112164.14229 1222 444\nBti2 84981.21914 0.00000 100 3443\nBti3 36629.02462 124806.49101 188 3434\nBti4 0.00000 109927.26425 122 1000\nrt 13.90181 13.90586 12 13\n\n\nSo i want to use the vector to calculate the sum of each variables in all the rows except the 17th row , after that i want to keep only the variable that have the highest sum, as you can see it's my vector contain the variables : \"MT91\" \"MT92\" \"MT93\" and it's MT93 that have the highest sum in the 16 rows so i want to eliminate MT91 and MT92\n\nThe result will be :\n\n MT93 MT95\nQC_G1 95774.1359 24\nQC_G2 81479.29575 45\nQC_G3 87021.95427 42545\nQC_G4 95558.76738 425\nQC_G5 96780.77245 4545\nQC_G6 91854.35304 455\nCtr1 128861.36163 2424\nCtr2 101353.25927 344\nCtr3 115683.73755 3434\nBti1 112164.14229 444\nBti2 0.00000 3443\nBti3 124806.49101 3434\nBti4 109927.26425 1000\nrt 3.90586 13\n\n\nNote that the vector is generated by the program that will generate a lot of vectors (i'm using for loops) so i don't know the length of the vectors neither the name of the variables in the loops\n\nPlease tell me if you want any clarification Thank you\n\ndataframe vectors R statistics • 155 views\n0\nEntering edit mode\n6 weeks ago\n\nYou could use colSums() to calculate the sums of the different variables. In your case, you don't want to use the 17th row for calculation, so you would omit it. But first, you can subset your data for the columns in Vector.\n\ndata_subset <- subset(data, select = Vector)\ndata_subset <- data_subset[c(1:16, 18:nrow(data_subset)),]\n\n\nYou then want to highlight the columns/variables that are NOT in Vector.\n\nall_columns <- colnames(data)\nsubset_columns <- setdiff(all_columns, Vector)\n\n\nYou then can use colSums to calculate the max column and extract the column name in your data-set based upon the subsetted data:\n\ncolumn_sums <- colSums(data_subset)\nmax_col <- which(column_sums == max(column_sums))\nmax_col <- names(max_col)\n\n\nThe only caveat is that I'm not sure if there are cases when Vector could contain all of the variables/column names. If that is a possibility, then subset_columns (the difference between the names in Vector and the column names of data_subset) would equal zero. 
Thus, you would want to add an if/else statement to check:\n\nif (identical(subset_columns, character(0))) {\n\nsubset_columns <- max_col\n\n} else {\n\nsubset_columns <- c(max_col, subset_columns)\n}\n\n\nYou can then subset the original data with the max column from Vector and the remaining columns that were not included in Vector (if there were any).\n\ndata <- subset(data, select = subset_columns)\n\n\nAltogether:\n\ndata_subset <- subset(data, select = Vector)\ndata_subset <- data_subset[c(1:16, 18:nrow(data)),]\nall_columns <- colnames(data)\nsubset_columns <- setdiff(all_columns, Vector)\n\nmax_col <- names(which(colSums(data_subset) == max(colSums(data_subset))))\n\nif (identical(subset_columns, character(0))) {\n\nsubset_columns <- max_col\n\n} else {\n\nsubset_columns <- c(max_col, subset_columns)\n}\n\ndata <- subset(data, select = subset_columns)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68842095,"math_prob":0.9456663,"size":2029,"snap":"2021-43-2021-49","text_gpt3_token_len":733,"char_repetition_ratio":0.122469135,"word_repetition_ratio":0.01179941,"special_character_ratio":0.5091178,"punctuation_ratio":0.11448598,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9962576,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T14:40:37Z\",\"WARC-Record-ID\":\"<urn:uuid:0cb504e9-3eaa-44f0-b99d-a94b1722013a>\",\"Content-Length\":\"24445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eaf76285-924a-4dea-a6b5-8cd539647885>\",\"WARC-Concurrent-To\":\"<urn:uuid:3aaab695-67e5-44cb-82c6-d5f8fed53e09>\",\"WARC-IP-Address\":\"45.79.169.51\",\"WARC-Target-URI\":\"https://www.biostars.org/p/9494978/\",\"WARC-Payload-Digest\":\"sha1:K4PITOFQ2CE23RYURIGDOTGLGY4LV6KX\",\"WARC-Block-Digest\":\"sha1:75PQHIHA3Q7EKPCXSGIBTTBB7GYE75HS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964364169.99_warc_CC-MAIN-20211209122503-20211209152503-00611.warc.gz\"}"}
https://www.juhe.cn/news/index/id/1266
[ "API接口,开发服务,免费咨询服务\n\n``````<!DOCTYPE html>\n<html>\n<body>\n<div class=\"progress\" id=\"progress\">0%</div>\n</div>\n</body>\n</html>``````\n\n``````.loading {\ndisplay: table;\nposition: fixed;\ntop: 0;\nleft: 0;\nwidth: 100%;\nheight: 100%;\nbackground-color: #fff;\nz-index: 5;\n}\n\ndisplay: table-cell;\nvertical-align: middle;\ntext-align: center;\n}``````\n\n(以下内容为了方便演示,默认使用jQuery,语法有es6的箭头函数)\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\n\n}``````\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0  // 初始化进度\n\nvar timer = window.setInterval(() => {  // 设置定时器  if (prg >= 100) {  // 到达终点,关闭定时器\nwindow.clearInterval(timer)\nprg = 100\n} else {  // 未到终点,进度自增\nprg++\n}\n\n\\$progress.html(prg + '%')\nconsole.log(prg)\n}, 100)\n\n}``````\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = window.setInterval(() => {  if (prg >= 80) {  // 到达第一阶段80%,关闭定时器,保持等待\nwindow.clearInterval(timer)\nprg = 100\n} else {\nprg++\n}\n\n\\$progress.html(prg + '%')\nconsole.log(prg)\n}, 100)\n\nwindow.clearInterval(timer)\nwindow.setInterval(() => {    if (prg >= 100) {  // 到达终点,关闭定时器\nwindow.clearInterval(timer)\nprg = 100\n} else {\nprg++\n}\n\n\\$progress.html(prg + '%')\nconsole.log(prg)\n}, 10)  // 时间间隔缩短\n}``````\n\nok,这差不多就是我们想要的功能了,我们来提炼一下代码,把重复的代码给封装一下:\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0progress(80, 100)\n\nprogress(100, 10, () => {\n})\n}\n\nfunction progress (dist, delay, callback) {\nwindow.clearInterval(timer)\ntimer = window.setInterval(() => {    if (prg >= dist) {\nwindow.clearInterval(timer)\nprg = dist\ncallback && callback()\n} else {\nprg++\n}\n\n\\$progress.html(prg + '%')\nconsole.log(prg)\n}, delay)\n}``````\n\n1. 进度太平均,相同的时间间隔,相同的增量,不符合网络环境的特点;\n\n3. 
每次第一阶段都是在80%就暂停了,露馅儿了;\n\n03让时间间隔随机\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0progress([80, 90], [1, 3], 100)  // 使用数组来表示随机数的区间\n\nprogress(100, [1, 5], 10, () => {\n}, 1000)\n})\n}\n\nfunction progress (dist, speed, delay, callback) {\nvar _dist = random(dist)\nvar _delay = random(delay)\nvar _speed = random(speed)\nwindow.clearTimeout(timer)\ntimer = window.setTimeout(() => {    if (prg + _speed >= _dist) {\nwindow.clearTimeout(timer)\nprg = _dist\ncallback && callback()\n} else {\nprg += _speed\nprogress (_dist, speed, delay, callback)\n}\n\n\\$progress.html(parseInt(prg) + '%')  // 留意,由于已经不是自增1,所以这里要取整\nconsole.log(prg)\n}, _delay)\n}\n\nfunction random (n) {  if (typeof n === 'object') {\nvar times = n - n\nvar offset = n    return Math.random() * times + offset\n} else {    return n\n}\n}``````\n\n04超时时间设置\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0progress([80, 90], [1, 3], 100)  // 使用数组来表示随机数的区间\n\nprogress(100, [1, 5], 10, () => {\n}, 1000)\n})\n}\n\nwindow.setTimeout(() => {  // 设置5秒的超时时间\nprogress(100, [1, 5], 10, () => {\n}, 1000)\n})\n}, 5000)\n\nfunction progress (dist, speed, delay, callback) {\nvar _dist = random(dist)\nvar _delay = random(delay)\nvar _speed = random(speed)\nwindow.clearTimeout(timer)\ntimer = window.setTimeout(() => {    if (prg + _speed >= _dist) {\nwindow.clearTimeout(timer)\nprg = _dist\ncallback && callback()\n} else {\nprg += _speed\nprogress (_dist, speed, delay, callback)\n}\n\n\\$progress.html(parseInt(prg) + '%')  // 留意,由于已经不是自增1,所以这里要取整\nconsole.log(prg)\n}, _delay)\n}\n\nfunction random (n) {  if (typeof n === 'object') {\nvar times = n - n\nvar offset = n    return Math.random() * times + offset\n} else {    return n\n}\n}``````\n\n``````<!DOCTYPE html>\n<html>\n<script>\n</script>\n<script src=\"index.js\"></script>\n<body>\n<div class=\"progress\" id=\"progress\">0%</div>\n</div>\n</body>\n</html>``````\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0var now = new Date()  // 记录当前时间\nvar timeout = 5000  // 超时时间\n\nprogress([80, 90], [1, 3], 100)\n\ncomplete()\ncomplete()\n} else {\nwindow.setTimeout(() => {  // 未超时,则等待剩余时间\ncomplete()\n}\n\nfunction complete () {  // 封装完成进度功能\nprogress(100, [1, 5], 10, () => {\nwindow.setTimeout(() => {\n}, 1000)\n})\n}\n\nfunction progress (dist, speed, delay, callback) {\nvar _dist = random(dist)\nvar _delay = random(delay)\nvar _speed = random(speed)\nwindow.clearTimeout(timer)\ntimer = window.setTimeout(() => {    if (prg + _speed >= _dist) {\nwindow.clearTimeout(timer)\nprg = _dist\ncallback && callback()\n} else {\nprg += _speed\nprogress (_dist, speed, delay, callback)\n}\n\n\\$progress.html(parseInt(prg) + '%')\nconsole.log(prg)\n}, _delay)\n}\n\nfunction random (n) {  if (typeof n === 'object') {\nvar times = n - n\nvar offset = n    return Math.random() * times + offset\n} else {    return n\n}\n}``````\n\n05具体场景分析\n\n1. 
我们需要一个能够替我们累计增量的变量next;\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0var now = new Date()\nvar timeout = 5000var next = prg\n\nadd([30, 50], [1, 3], 100)  // 第一阶段\n\nwindow.setTimeout(() => {  // 模拟图a加载完\n}, 1000)\n\nwindow.setTimeout(() => {  // 模拟图c加载完\n}, 2000)\n\nwindow.setTimeout(() => {  // 模拟图b加载完\n}, 2500)\n\ncomplete()\ncomplete()\n} else {\nwindow.setTimeout(() => {\ncomplete()\n}\n\nfunction complete () {\nadd(100, [1, 5], 10, () => {\nwindow.setTimeout(() => {\n}, 1000)\n})\n}\n\nfunction add (dist, speed, delay, callback) {\nvar _dist = random(dist)  if (next + _dist > 100) {  // 对超出部分裁剪对齐\nnext = 100\n} else {\nnext += _dist\n}\n\nprogress(next, speed, delay, callback)\n}\n\nfunction progress (dist, speed, delay, callback) {\nvar _delay = random(delay)\nvar _speed = random(speed)\nwindow.clearTimeout(timer)\ntimer = window.setTimeout(() => {    if (prg + _speed >= dist) {\nwindow.clearTimeout(timer)\nprg = dist\ncallback && callback()\n} else {\nprg += _speed\nprogress (dist, speed, delay, callback)\n}\n\n\\$progress.html(parseInt(prg) + '%')\nconsole.log(prg)\n}, _delay)\n}\n\nfunction random (n) {  if (typeof n === 'object') {\nvar times = n - n\nvar offset = n    return Math.random() * times + offset\n} else {    return n\n}\n}``````\n\n06结束\n\nez-progress 是一个web(伪)进度插件,使用 ez-progress 实现这个功能非常简单:\n\n``````var Progress = require('ez-progress')\nvar prg = new Progress()\n\nvar \\$progress = \\$('#progress')\n\nprg.on('progress', function (res) {\nvar progress = parseInt(res.progress)  // 注意进度取整,不然有可能会出现小数\n\\$progress.html(progress + '%')\n})\n\nprg.go([60, 70], function (res) {\nprg.complete(null, [0, 5], [0, 50])  // 飞一般地冲向终点\n}, [0, 3], [0, 200])\n\nprg.complete(null, [0, 5], [0, 50])  // 飞一般地冲向终点\n}``````\n\nAPI资讯\n\nAPI接口,开发服务,免费咨询服务\n\n``````<!DOCTYPE html>\n<html>\n<body>\n<div class=\"progress\" id=\"progress\">0%</div>\n</div>\n</body>\n</html>``````\n\n``````.loading {\ndisplay: table;\nposition: fixed;\ntop: 0;\nleft: 0;\nwidth: 100%;\nheight: 100%;\nbackground-color: #fff;\nz-index: 5;\n}\n\ndisplay: table-cell;\nvertical-align: middle;\ntext-align: center;\n}``````\n\n(以下内容为了方便演示,默认使用jQuery,语法有es6的箭头函数)\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\n\n}``````\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0  // 初始化进度\n\nvar timer = window.setInterval(() => {  // 设置定时器  if (prg >= 100) {  // 到达终点,关闭定时器\nwindow.clearInterval(timer)\nprg = 100\n} else {  // 未到终点,进度自增\nprg++\n}\n\n\\$progress.html(prg + '%')\nconsole.log(prg)\n}, 100)\n\n}``````\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = window.setInterval(() => {  if (prg >= 80) {  // 到达第一阶段80%,关闭定时器,保持等待\nwindow.clearInterval(timer)\nprg = 100\n} else {\nprg++\n}\n\n\\$progress.html(prg + '%')\nconsole.log(prg)\n}, 100)\n\nwindow.clearInterval(timer)\nwindow.setInterval(() => {    if (prg >= 100) {  // 到达终点,关闭定时器\nwindow.clearInterval(timer)\nprg = 100\n} else {\nprg++\n}\n\n\\$progress.html(prg + '%')\nconsole.log(prg)\n}, 10)  // 时间间隔缩短\n}``````\n\nok,这差不多就是我们想要的功能了,我们来提炼一下代码,把重复的代码给封装一下:\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0progress(80, 100)\n\nprogress(100, 10, () => {\n})\n}\n\nfunction progress (dist, delay, callback) {\nwindow.clearInterval(timer)\ntimer = window.setInterval(() => {    if (prg >= dist) {\nwindow.clearInterval(timer)\nprg = dist\ncallback 
&& callback()\n} else {\nprg++\n}\n\n\\$progress.html(prg + '%')\nconsole.log(prg)\n}, delay)\n}``````\n\n1. 进度太平均,相同的时间间隔,相同的增量,不符合网络环境的特点;\n\n3. 每次第一阶段都是在80%就暂停了,露馅儿了;\n\n03让时间间隔随机\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0progress([80, 90], [1, 3], 100)  // 使用数组来表示随机数的区间\n\nprogress(100, [1, 5], 10, () => {\n}, 1000)\n})\n}\n\nfunction progress (dist, speed, delay, callback) {\nvar _dist = random(dist)\nvar _delay = random(delay)\nvar _speed = random(speed)\nwindow.clearTimeout(timer)\ntimer = window.setTimeout(() => {    if (prg + _speed >= _dist) {\nwindow.clearTimeout(timer)\nprg = _dist\ncallback && callback()\n} else {\nprg += _speed\nprogress (_dist, speed, delay, callback)\n}\n\n\\$progress.html(parseInt(prg) + '%')  // 留意,由于已经不是自增1,所以这里要取整\nconsole.log(prg)\n}, _delay)\n}\n\nfunction random (n) {  if (typeof n === 'object') {\nvar times = n - n\nvar offset = n    return Math.random() * times + offset\n} else {    return n\n}\n}``````\n\n04超时时间设置\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0progress([80, 90], [1, 3], 100)  // 使用数组来表示随机数的区间\n\nprogress(100, [1, 5], 10, () => {\n}, 1000)\n})\n}\n\nwindow.setTimeout(() => {  // 设置5秒的超时时间\nprogress(100, [1, 5], 10, () => {\n}, 1000)\n})\n}, 5000)\n\nfunction progress (dist, speed, delay, callback) {\nvar _dist = random(dist)\nvar _delay = random(delay)\nvar _speed = random(speed)\nwindow.clearTimeout(timer)\ntimer = window.setTimeout(() => {    if (prg + _speed >= _dist) {\nwindow.clearTimeout(timer)\nprg = _dist\ncallback && callback()\n} else {\nprg += _speed\nprogress (_dist, speed, delay, callback)\n}\n\n\\$progress.html(parseInt(prg) + '%')  // 留意,由于已经不是自增1,所以这里要取整\nconsole.log(prg)\n}, _delay)\n}\n\nfunction random (n) {  if (typeof n === 'object') {\nvar times = n - n\nvar offset = n    return Math.random() * times + offset\n} else {    return n\n}\n}``````\n\n``````<!DOCTYPE html>\n<html>\n<script>\n</script>\n<script src=\"index.js\"></script>\n<body>\n<div class=\"progress\" id=\"progress\">0%</div>\n</div>\n</body>\n</html>``````\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0var now = new Date()  // 记录当前时间\nvar timeout = 5000  // 超时时间\n\nprogress([80, 90], [1, 3], 100)\n\ncomplete()\ncomplete()\n} else {\nwindow.setTimeout(() => {  // 未超时,则等待剩余时间\ncomplete()\n}\n\nfunction complete () {  // 封装完成进度功能\nprogress(100, [1, 5], 10, () => {\nwindow.setTimeout(() => {\n}, 1000)\n})\n}\n\nfunction progress (dist, speed, delay, callback) {\nvar _dist = random(dist)\nvar _delay = random(delay)\nvar _speed = random(speed)\nwindow.clearTimeout(timer)\ntimer = window.setTimeout(() => {    if (prg + _speed >= _dist) {\nwindow.clearTimeout(timer)\nprg = _dist\ncallback && callback()\n} else {\nprg += _speed\nprogress (_dist, speed, delay, callback)\n}\n\n\\$progress.html(parseInt(prg) + '%')\nconsole.log(prg)\n}, _delay)\n}\n\nfunction random (n) {  if (typeof n === 'object') {\nvar times = n - n\nvar offset = n    return Math.random() * times + offset\n} else {    return n\n}\n}``````\n\n05具体场景分析\n\n1. 
我们需要一个能够替我们累计增量的变量next;\n\n``````var \\$loading = \\$('#loading')\nvar \\$progress = \\$('#progress')\nvar prg = 0var timer = 0var now = new Date()\nvar timeout = 5000var next = prg\n\nadd([30, 50], [1, 3], 100)  // 第一阶段\n\nwindow.setTimeout(() => {  // 模拟图a加载完\n}, 1000)\n\nwindow.setTimeout(() => {  // 模拟图c加载完\n}, 2000)\n\nwindow.setTimeout(() => {  // 模拟图b加载完\n}, 2500)\n\ncomplete()\ncomplete()\n} else {\nwindow.setTimeout(() => {\ncomplete()\n}\n\nfunction complete () {\nadd(100, [1, 5], 10, () => {\nwindow.setTimeout(() => {\n}, 1000)\n})\n}\n\nfunction add (dist, speed, delay, callback) {\nvar _dist = random(dist)  if (next + _dist > 100) {  // 对超出部分裁剪对齐\nnext = 100\n} else {\nnext += _dist\n}\n\nprogress(next, speed, delay, callback)\n}\n\nfunction progress (dist, speed, delay, callback) {\nvar _delay = random(delay)\nvar _speed = random(speed)\nwindow.clearTimeout(timer)\ntimer = window.setTimeout(() => {    if (prg + _speed >= dist) {\nwindow.clearTimeout(timer)\nprg = dist\ncallback && callback()\n} else {\nprg += _speed\nprogress (dist, speed, delay, callback)\n}\n\n\\$progress.html(parseInt(prg) + '%')\nconsole.log(prg)\n}, _delay)\n}\n\nfunction random (n) {  if (typeof n === 'object') {\nvar times = n - n\nvar offset = n    return Math.random() * times + offset\n} else {    return n\n}\n}``````\n\n06结束\n\nez-progress 是一个web(伪)进度插件,使用 ez-progress 实现这个功能非常简单:\n\n``````var Progress = require('ez-progress')\nvar prg = new Progress()\n\nvar \\$progress = \\$('#progress')\n\nprg.on('progress', function (res) {\nvar progress = parseInt(res.progress)  // 注意进度取整,不然有可能会出现小数\n\\$progress.html(progress + '%')\n})\n\nprg.go([60, 70], function (res) {\nprg.complete(null, [0, 5], [0, 50])  // 飞一般地冲向终点\n}, [0, 3], [0, 200])\n\nprg.complete(null, [0, 5], [0, 50])  // 飞一般地冲向终点\n}``````", null, "", null, "• 短信API服务\n• 银行卡四元素检测[简]\n• 身份证实名认证\n• 手机状态查询\n• 三网手机实名制认证[简]\n• 身份证OCR识别\n• 证件识别\n• 企业工商信息\n\n• 短信API服务\n• 银行卡四元素检测[简]\n• 身份证实名认证\n• 手机状态查询\n• 三网手机实名制认证[简]\n• 身份证OCR识别\n• 证件识别\n• 企业工商信息\n• 确定", null, "×\n\n0512-88869195\n\nData Drives The Future" ]
[ null, "https://www.juhe.cn/news/index/id/1266", null, "https://www.juhe.cn/news/index/id/1266", null, "https://juheimage.oss-cn-hangzhou.aliyuncs.com/www/landPage/api-free.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5011411,"math_prob":0.99785614,"size":19925,"snap":"2023-40-2023-50","text_gpt3_token_len":9595,"char_repetition_ratio":0.17800312,"word_repetition_ratio":0.99823946,"special_character_ratio":0.33239648,"punctuation_ratio":0.1617139,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98133886,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T23:35:40Z\",\"WARC-Record-ID\":\"<urn:uuid:8f2d553f-b70d-490d-93c8-25ecab3a80fa>\",\"Content-Length\":\"185920\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b2475903-2eaa-4394-b721-85b3c2f03b57>\",\"WARC-Concurrent-To\":\"<urn:uuid:4e5fd295-857d-480a-9bdd-28674cbb0050>\",\"WARC-IP-Address\":\"203.107.54.210\",\"WARC-Target-URI\":\"https://www.juhe.cn/news/index/id/1266\",\"WARC-Payload-Digest\":\"sha1:4SA5HQ6SKBSWJPNFFKKTHG23ABK3B4DN\",\"WARC-Block-Digest\":\"sha1:PIF7V3D5XSBFQUC5F4QZAAPBOX4JBKC6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506539.13_warc_CC-MAIN-20230923231031-20230924021031-00065.warc.gz\"}"}
https://www.libraw.org/comment/reply/2549/5622
[ "### I have generated the\n\nI have generated the backtrace from valgrind. Here you go:\n\n==23395== Process terminating with default action of signal 11 (SIGSEGV)\n==23395== at 0x4EC4DBB: LibRaw::adobe_coeff(unsigned int, char const*, int) (colordata.cpp:1697)\n==23395== by 0x4EAC172: LibRaw::GetNormalizedModel() (normalize_model.cpp:1157)\n==23395== by 0x4E9C07D: LibRaw::identify() (identify.cpp:1071)\n==23395== by 0x4EC7C95: LibRaw::open_datastream(LibRaw_abstract_datastream*) (open.cpp:376)\n==23395== by 0x4EC94F0: LibRaw::open_file(char const*, long long) (open.cpp:52)\n==23395== by 0x400DDC: main (simple_dcraw.cpp:122)\n==23395==\n==23395== HEAP SUMMARY:\n==23395== in use at exit: 298,973 bytes in 9 blocks\n==23395== total heap usage: 27 allocs, 18 frees, 343,743 bytes allocated\n==23395==\n==23395== Searching for pointers to 9 not-freed blocks\n==23395== Checked 1,382,688 bytes\n==23395==\n==23395== 8 bytes in 1 blocks are still reachable in loss record 1 of 9\n==23395== at 0x4C29E96: malloc (vg_replace_malloc.c:309)\n==23395== by 0x5E1A868: ??? (in /usr/lib64/libgomp.so.1.0.0)\n==23395== by 0x5E2968B: ??? (in /usr/lib64/libgomp.so.1.0.0)\n==23395== by 0x5E18EC6: ??? (in /usr/lib64/libgomp.so.1.0.0)\n==23395== by 0x400F4C2: _dl_init (in /usr/lib64/ld-2.17.so)\n==23395== by 0x40011A9: ??? (in /usr/lib64/ld-2.17.so)\n==23395== by 0x2: ???\n==23395== by 0x1FFF0003D2: ???\n==23395== by 0x1FFF0003DF: ???\n==23395== by 0x1FFF0003E2: ???\n==23395==\n==23395== 37 bytes in 1 blocks are still reachable in loss record 2 of 9\n==23395== at 0x4C2A4B6: operator new(unsigned long) (vg_replace_malloc.c:344)\n==23395== by 0x58C4CF8: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator const&) (in /usr/lib64/libstdc++.so.6.0.19)\n==23395== by 0x58C6580: char* std::string::_S_construct(char const*, char const*, std::allocator const&, std::forward_iterator_tag) (in /usr/lib64/libstdc++.so.6.0.19)\n==23395== by 0x58C69B7: std::basic_string, std::allocator >::basic_string(char const*, std::allocator const&) (in /usr/lib64/libstdc++.so.6.0.19)\n==23395== by 0x4E56AA5: LibRaw_file_datastream::LibRaw_file_datastream(char const*) (libraw_datastream.cpp:56)\n==23395== by 0x4EC948D: LibRaw::open_file(char const*, long long) (open.cpp:38)\n==23395== by 0x400DDC: main (simple_dcraw.cpp:122)\n==23395==\n==23395== 40 bytes in 1 blocks are still reachable in loss record 3 of 9\n==23395== at 0x4C2A4B6: operator new(unsigned long) (vg_replace_malloc.c:344)\n==23395== by 0x4EC947F: LibRaw::open_file(char const*, long long) (open.cpp:38)\n==23395== by 0x400DDC: main (simple_dcraw.cpp:122)\n==23395==\n==23395== 240 bytes in 1 blocks are still reachable in loss record 4 of 9\n==23395== at 0x4C2A4B6: operator new(unsigned long) (vg_replace_malloc.c:344)\n==23395== by 0x4E56AEA: LibRaw_file_datastream::LibRaw_file_datastream(char const*) (libraw_datastream.cpp:70)\n==23395== by 0x4EC948D: LibRaw::open_file(char const*, long long) (open.cpp:38)\n==23395== by 0x400DDC: main (simple_dcraw.cpp:122)\n==23395==\n==23395== 568 bytes in 1 blocks are still reachable in loss record 5 of 9\n==23395== at 0x4C29E96: malloc (vg_replace_malloc.c:309)\n==23395== by 0x64D458C: __fopen_internal (in /usr/lib64/libc-2.17.so)\n==23395== by 0x58834FF: std::__basic_file::open(char const*, std::_Ios_Openmode, int) (in /usr/lib64/libstdc++.so.6.0.19)\n==23395== by 0x58BE7F9: std::basic_filebuf >::open(char const*, std::_Ios_Openmode) (in /usr/lib64/libstdc++.so.6.0.19)\n==23395== by 0x4E56B06: 
LibRaw_file_datastream::LibRaw_file_datastream(char const*) (libraw_datastream.cpp:74)\n==23395== by 0x4EC948D: LibRaw::open_file(char const*, long long) (open.cpp:38)\n==23395== by 0x400DDC: main (simple_dcraw.cpp:122)\n==23395==\n==23395== 2,560 bytes in 1 blocks are still reachable in loss record 6 of 9\n==23395== at 0x4C29E96: malloc (vg_replace_malloc.c:309)\n==23395== by 0x4ECC062: malloc (libraw_alloc.h:49)\n==23395== by 0x4ECC062: LibRaw::malloc(unsigned long) (utils_libraw.cpp:239)\n==23395== by 0x4EA59D4: LibRaw::parse_makernote(int, int) (makernotes.cpp:527)\n==23395== by 0x4E93820: LibRaw::parseCR3(unsigned long long, unsigned long long, short&, char*, short&, short&) (cr3_parser.cpp:503)\n==23395== by 0x4E9331D: LibRaw::parseCR3(unsigned long long, unsigned long long, short&, char*, short&, short&) (cr3_parser.cpp:519)\n==23395== by 0x4E9331D: LibRaw::parseCR3(unsigned long long, unsigned long long, short&, char*, short&, short&) (cr3_parser.cpp:519)\n==23395== by 0x4E9331D: LibRaw::parseCR3(unsigned long long, unsigned long long, short&, char*, short&, short&) (cr3_parser.cpp:519)\n==23395== by 0x4E9331D: LibRaw::parseCR3(unsigned long long, unsigned long long, short&, char*, short&, short&) (cr3_parser.cpp:519)\n==23395== by 0x4E9331D: LibRaw::parseCR3(unsigned long long, unsigned long long, short&, char*, short&, short&) (cr3_parser.cpp:519)\n==23395== by 0x4E9DC41: LibRaw::identify() (identify.cpp:719)\n==23395== by 0x4EC7C95: LibRaw::open_datastream(LibRaw_abstract_datastream*) (open.cpp:376)\n==23395== by 0x4EC94F0: LibRaw::open_file(char const*, long long) (open.cpp:52)\n==23395==\n==23395== 4,096 bytes in 1 blocks are still reachable in loss record 7 of 9\n==23395== at 0x4C29E96: malloc (vg_replace_malloc.c:309)\n==23395== by 0x4EC670C: libraw_memmgr (libraw_alloc.h:36)\n==23395== by 0x4EC670C: LibRaw::LibRaw(unsigned int) (init_close_utils.cpp:26)\n==23395== by 0x400D75: main (simple_dcraw.cpp:64)\n==23395==\n==23395== 8,192 bytes in 1 blocks are still reachable in loss record 8 of 9\n==23395== at 0x4C2AB5B: operator new[](unsigned long) (vg_replace_malloc.c:433)\n==23395== by 0x58BE1CB: std::basic_filebuf >::_M_allocate_internal_buffer() (in /usr/lib64/libstdc++.so.6.0.19)\n==23395== by 0x58BE811: std::basic_filebuf >::open(char const*, std::_Ios_Openmode) (in /usr/lib64/libstdc++.so.6.0.19)\n==23395== by 0x4E56B06: LibRaw_file_datastream::LibRaw_file_datastream(char const*) (libraw_datastream.cpp:74)\n==23395== by 0x4EC948D: LibRaw::open_file(char const*, long long) (open.cpp:38)\n==23395== by 0x400DDC: main (simple_dcraw.cpp:122)\n==23395==\n==23395== 283,232 bytes in 1 blocks are definitely lost in loss record 9 of 9\n==23395== at 0x4C2A4B6: operator new(unsigned long) (vg_replace_malloc.c:344)\n==23395== by 0x4EC6BFF: LibRaw::LibRaw(unsigned int) (init_close_utils.cpp:104)\n==23395== by 0x400D75: main (simple_dcraw.cpp:64)\n==23395==\n==23395== LEAK SUMMARY:\n==23395== definitely lost: 283,232 bytes in 1 blocks\n==23395== indirectly lost: 0 bytes in 0 blocks\n==23395== possibly lost: 0 bytes in 0 blocks\n==23395== still reachable: 15,741 bytes in 8 blocks\n==23395== of which reachable via heuristic:\n==23395== stdstring : 37 bytes in 1 blocks\n==23395== suppressed: 0 bytes in 0 blocks\n==23395==\n==23395== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)\nSegmentation fault\n[siddaharth.suman@sid samples]\\$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64567775,"math_prob":0.92284876,"size":13750,"snap":"2023-40-2023-50","text_gpt3_token_len":5012,"char_repetition_ratio":0.25541976,"word_repetition_ratio":0.993865,"special_character_ratio":0.45454547,"punctuation_ratio":0.29033417,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99489915,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T14:02:43Z\",\"WARC-Record-ID\":\"<urn:uuid:2b90d417-1c69-4ea6-8dd5-05c9b9242e48>\",\"Content-Length\":\"44686\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:44c4f63b-f828-4866-9c27-36306258414b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0eb31c6-b192-44da-8f34-463c821caca3>\",\"WARC-IP-Address\":\"192.95.29.165\",\"WARC-Target-URI\":\"https://www.libraw.org/comment/reply/2549/5622\",\"WARC-Payload-Digest\":\"sha1:DXPR6NDA7PUEYTGXOGPAUJM7FQWGLIUE\",\"WARC-Block-Digest\":\"sha1:CGKKKP3EV7JESP63W2IJ74ZDFOTUOBVI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100529.8_warc_CC-MAIN-20231204115419-20231204145419-00012.warc.gz\"}"}
https://aimerneige.com/zh/post/python/20-python-skills/
[ "## 1 反转字符串#\n\n``````# Reversing a string using slicing\n\nmy_string = \"ABCDE\"\nreversed_string = my_string[::-1]\n\nprint(reversed_string)\n\n# Output\n# EDCBA\n``````\n\n``````// [a🅱️c]\nfor (int i = a, i < b; i = i + c) {\n// do something\n}\n``````\n\n## 2 转化为标题(首字母大写)#\n\n``````my_string = \"my name is chaitanya baweja\"\n\n# using the title() function of string class\nnew_string = my_string.title()\n\nprint(new_string)\n\n# Output\n# My Name Is Chaitanya Baweja\n``````\n\n### 英文标题首字母大写规则#\n\n1. 标题的第一个单词的首字母要大写;\n2. 冠词都不需要大写;\n3. 字母个数多于 3 个(不含 3 个)的介词、连词的首字母要大写;\n4. 名词、动词、形容词、副词、代词、感叹词首字母应大写;\n5. 大写所有英语中要求大写的单词。如月份、人名、地名等等。\n\n``````test_str = \"the story of an magical apple and me\"\ntitle_str = test_str.title()\nprint(title_str)\n\n# Output\n# The Story Of An Magical Apple And Me\n``````\n\n``````my_string = \"helloworld\"\nnew_string = my_string.title()\nprint(new_string)\n\n# Output\n# Helloworld\n``````\n\n## 3 查找字符串的唯一元素#\n\n``````my_string = \"aavvccccddddeee\"\n\n# converting the string to a set\ntemp_set = set(my_string)\n\n# stitching set into a string using join\nnew_string = ''.join(temp_set)\n\nprint(new_string)\n\n# Output\n# ecdva\n``````\n\n``````my_string = \"the quick brown fox jumps over the lazy dog.\"\ntemp_set = set(my_string)\nnew_string = ''.join(temp_set)\nprint(new_string)\n\n# Output\n# trzixup.efbajovcdnslywhg qmk\n``````\n\n## 4 输出 n 次字符串或列表#\n\n``````n = 3 # number of repetitions\n\nmy_string = \"abcd\"\nmy_list = [1,2,3]\n\nprint(my_string*n)\n# abcdabcdabcd\n\nprint(my_list*n)\n# [1,2,3,1,2,3,1,2,3]\n``````\n\n``````n = 4\nmy_list = *n # n denotes the length of the required list\n# [0, 0, 0, 0]\n``````\n\n## 5 列表解析#\n\n``````# Multiplying each element in a list by 2\n\noriginal_list = [1,2,3,4]\n\nnew_list = [2*x for x in original_list]\n\nprint(new_list)\n# [2,4,6,8]\n``````\n\n## 6 交换俩个变量的值#\n\nPython 使得交换俩个变量的值变得十分简单,并不需要使用另一个变量。\n\n``````a = 1\nb = 2\n\na, b = b, a\n\nprint(a) # 2\nprint(b) # 1\n``````\n\n## 7 把字符串分解为子字符串列表#\n\n``````string_1 = \"My name is Chaitanya Baweja\"\nstring_2 = \"sample/ string 2\"\n\n# default separator ' '\nprint(string_1.split())\n# ['My', 'name', 'is', 'Chaitanya', 'Baweja']\n\n# defining separator as '/'\nprint(string_2.split('/'))\n# ['sample', ' string 2']\n``````\n\n## 8 将字符串列表组合成单字符串#\n\n`join()`方法可以通过传入的参数作为间隔把字符串数组组合成单字符串。\n\n``````list_of_strings = ['My', 'name', 'is', 'Chaitanya', 'Baweja']\n\n# Using join with the comma separator\nprint(','.join(list_of_strings))\n\n# Output\n# My,name,is,Chaitanya,Baweja\n``````\n\n## 9 检查给定字符串是否为回文(Palindrome)#\n\n``````my_string = \"abcba\"\n\nif my_string == my_string[::-1]:\nprint(\"palindrome\")\nelse:\nprint(\"not palindrome\")\n\n# Output\n# palindrome\n``````\n\n## 10 列表中元素出现的次数#\n\nPython 的 Counter 会记录容器内每个元素的出现次数。`Counter()` 返回一个字典,字典内以元素为键,出现次数为值。\n\n``````# finding frequency of each element in a list\nfrom collections import Counter\n\nmy_list = ['a','a','b','b','b','c','d','d','d','d','d']\ncount = Counter(my_list) # defining a counter object\n\nprint(count) # Of all elements\n# Counter({'d': 5, 'b': 3, 'a': 2, 'c': 1})\n\nprint(count['b']) # of individual element\n# 3\n\nprint(count.most_common(1)) # most frequent element\n# [('d', 5)]\n``````\n\n## 11 判断俩个字符串是否为易位词#\n\nCounter 的一个有趣的应用是判断易位词。\n\n``````from collections import Counter\n\nstr_1, str_2, str_3 = \"acbde\", \"abced\", \"abcda\"\ncnt_1, cnt_2, cnt_3 = Counter(str_1), Counter(str_2), Counter(str_3)\n\nif cnt_1 == cnt_2:\nprint('1 and 2 anagram')\nif cnt_1 == cnt_3:\nprint('1 and 3 anagram')\n\n# 
Output\n# 1 and 2 anagram\n``````\n\n## 12 使用 try-except-else#\n\n``````a, b = 1,0\n\ntry:\nprint(a/b)\n# exception raised when b is 0\nexcept ZeroDivisionError:\nprint(\"division by zero\")\nelse:\nprint(\"no exceptions raised\")\nfinally:\nprint(\"Run this always\")\n\n# Output\n# division by zero\n# Run this always\n``````\n``````try:\nsome_dangerous_operation()\n# something may cause exception\nexcept:\nexception_handle()\n# something run when the exception happens\nelse:\nno_exception()\n# something run when no exception happens\nfinally:\nsomething_must_to_been_done()\n# something run weather the exception happens or not\n``````\n\n## 13 使用 Enumerate 来取得 索引-值 对#\n\n``````my_list = ['a', 'b', 'c', 'd', 'e']\n\nfor index, value in enumerate(my_list):\nprint('{0}: {1}'.format(index, value))\n\n# 0: a\n# 1: b\n# 2: c\n# 3: d\n# 4: e\n``````\n\n## 14 检查对象的内存使用#\n\n``````import sys\n\nnum = 21\n\nprint(sys.getsizeof(num))\n\n# In Python 2, 24\n# In Python 3, 28\n``````\n\n## 15 合并俩个字典#\n\nPython 3.5 简化了这个过程。\n\n``````dict_1 = {'apple': 9, 'banana': 6}\ndict_2 = {'banana': 4, 'orange': 8}\n\ncombined_dict = {**dict_1, **dict_2}\n\nprint(combined_dict)\n# Output\n# {'apple': 9, 'banana': 4, 'orange': 8}\n``````\n\n## 16 执行一段代码所需时间#\n\n``````import time\n\nstart_time = time.time()\n# Code to check follows\na, b = 1,2\nc = a+ b\n# Code to check ends\nend_time = time.time()\ntime_taken_in_micro = (end_time- start_time)*(10**6)\n\nprint(\" Time taken in micro_seconds: {0} ms\").format(time_taken_in_micro)\n``````\n\n## 17 嵌套列表扁平化#\n\n``````from iteration_utilities import deepflatten\n\n# if you only have one depth nested_list, use this\ndef flatten(l):\nreturn [item for sublist in l for item in sublist]\n\nl = [[1,2,3],]\nprint(flatten(l))\n# [1, 2, 3, 3]\n\n# if you don't know how deep the list is nested\nl = [[1,2,3],[4,,[6,7]],[8,[9,]]]\n\nprint(list(deepflatten(l, depth=3)))\n# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n``````\n\n## 18 列表取样#\n\n``````import random\n\nmy_list = ['a', 'b', 'c', 'd', 'e']\nnum_samples = 2\n\nsamples = random.sample(my_list,num_samples)\nprint(samples)\n# [ 'a', 'e'] this will have any 2 random values\n``````\n\n``````import secrets # imports secure module.\nsecure_random = secrets.SystemRandom() # creates a secure random object.\n\nmy_list = ['a','b','c','d','e']\nnum_samples = 2\n\nsamples = secure_random.sample(my_list, num_samples)\n\nprint(samples)\n# [ 'e', 'd'] this will have any 2 random values\n``````\n\n## 19 数字化#\n\n``````num = 123456\n\n# using map\nlist_of_digits = list(map(int, str(num)))\n\nprint(list_of_digits)\n# [1, 2, 3, 4, 5, 6]\n\n# using list comprehension\nlist_of_digits = [int(x) for x in str(num)]\n\nprint(list_of_digits)\n# [1, 2, 3, 4, 5, 6]\n``````\n\n## 20 检查唯一性#\n\n``````def unique(l):\nif len(l)==len(set(l)):\nprint(\"All elements are unique\")\nelse:\nprint(\"List has duplicates\")\n\nunique([1,2,3,4])\n# All elements are unique\n\nunique([1,1,2,3])\n# List has duplicates\n``````" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.61906874,"math_prob":0.89731336,"size":8008,"snap":"2023-40-2023-50","text_gpt3_token_len":4042,"char_repetition_ratio":0.11356822,"word_repetition_ratio":0.04743083,"special_character_ratio":0.29982516,"punctuation_ratio":0.14876033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98803365,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T02:19:02Z\",\"WARC-Record-ID\":\"<urn:uuid:8f6a352d-bcc3-4824-b4e0-cc0df55007a9>\",\"Content-Length\":\"84311\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be8d22b9-1171-4828-80b6-b1f7f3c4e90d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a044f4df-9677-42f6-9280-eda1e38e1c15>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://aimerneige.com/zh/post/python/20-python-skills/\",\"WARC-Payload-Digest\":\"sha1:KTSFKZOXWLP5G5BZZHGR6LB6K5VESP5F\",\"WARC-Block-Digest\":\"sha1:77VGEPOWGZG3YENFHFPAK625A5GMQF3C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511351.18_warc_CC-MAIN-20231004020329-20231004050329-00476.warc.gz\"}"}
https://au.mathworks.com/matlabcentral/answers/731353-output-result-s-format?s_tid=prof_contriblnk
[ "# Output result's format?\n\n3 views (last 30 days)\nI am executing a great-script which has inside it another mini-scripts.\nThose scripts execute mathematical operations in order to calculate and show results of differente variables.\nThe issue is that Matlab do show those results correctly but some of the in this strange format:", null, "How can I solve this problem and achieve Matlab giving me the results in a normal format? I think that this doesn`t happen if I execute those mini-scripts separately and individualy. It only happens when I execute the great-script. I also have tried writing\nformat short\nIn the head of each mini-script but seems that this didn't fix the issue.\nThanks!\nStephen23 on 6 Feb 2021\nThe variables EVfiT and EVfiT1 are symbolic or character or possibly some custom class... they are certainly not numeric, so the format command is totally irrelevant.\n\nJohn D'Errico on 5 Feb 2021\nThese are symbolic expressions you are showing. The format command has NOTHING to do with symbolic expressions. It applies only to numeric values, thus typically double precision numbers.\nX = 1 + sqrt(sym('13445664/2424536343636'))\nX =", null, "X\nX =", null, "format short\nX\nX =", null, "format rat\nX\nX =", null, "As you see, anything I do with format does not touch how we see X.\nI cannot replicate what you did, because you show only a picture of your output. (If you really want help, then paste in the actual code, as text.)\nBut now, if I convert X to a double,\nformat short\nXd = double(X)\nXd = 1.0024\nformat rat\nXd\nXd =\n1277/1274\nformat long g\nXd\nXd =\n1.00235492336053\nNow you see that format works, and it works as designed. I can turn it on and off on a whim. It applies to floating point numbers. The numbers that you show in that picture LOOK like numbers. But format does not understand symbolic expressions. It ignores them.\nErikJon Pérez Mardaras on 7 Feb 2021\nThanks a lot for your replies and your help, I just have found where was the mistake.\nThe point is that I had in the first mini-script this command\n[numer,deno]=numden(sym(i));\nWhich calculated numer and deno as symbolic expressions and thus, the rest of the variables calculated along the code, which contains in their expressions numer or deno, are also symbolic expressions and that's why in my results appeared the results as quotients and not as decimal numbers.\nI have fixed the issue by converting numer and deno to double expressions just like mate darova said in his comment\nnumerr=double(numer);\ndenoo=double(deno);\n\ndarova on 30 Jan 2021\ntry double\na = 2;\nb = 1/sym(a);\nc = b/a\ndouble(c)\nWalter Roberson on 6 Feb 2021\n\"format\" is a command, not a local setting for functions or subroutines.\ndisp('hello')\ncommand inside a function, would you expect the output of the function to disappear from the command window when the function returned? NO -- you would recognize that disp() changes state permanently, rather than disp() being a local setting within the function. Just so, \"format\" changes are permanent until changed.\n\nR2019b\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!" ]
[ null, "https://www.mathworks.com/matlabcentral/answers/uploaded_files/504598/image.png", null, "https://www.mathworks.com/matlabcentral/answers/uploaded_files/510617/image.png", null, "https://www.mathworks.com/matlabcentral/answers/uploaded_files/510622/image.png", null, "https://www.mathworks.com/matlabcentral/answers/uploaded_files/510627/image.png", null, "https://www.mathworks.com/matlabcentral/answers/uploaded_files/510632/image.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89157766,"math_prob":0.78547114,"size":2561,"snap":"2022-05-2022-21","text_gpt3_token_len":607,"char_repetition_ratio":0.10285491,"word_repetition_ratio":0.0,"special_character_ratio":0.23584537,"punctuation_ratio":0.09437751,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9598152,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T00:23:14Z\",\"WARC-Record-ID\":\"<urn:uuid:92d5c261-baaa-4cdb-ba08-b39987ef5c27>\",\"Content-Length\":\"229283\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0855eee9-cdfa-4540-9c1e-df6d2457ed96>\",\"WARC-Concurrent-To\":\"<urn:uuid:dddc3626-9ed5-4715-bc55-55ac69fbf53d>\",\"WARC-IP-Address\":\"23.1.9.244\",\"WARC-Target-URI\":\"https://au.mathworks.com/matlabcentral/answers/731353-output-result-s-format?s_tid=prof_contriblnk\",\"WARC-Payload-Digest\":\"sha1:FNG7XYOMF7U5I3AQGVV7OKTQWTFC355B\",\"WARC-Block-Digest\":\"sha1:P6FL2FRUGOEQ2UF5KUBBZF5Q75HWNMTN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662515466.5_warc_CC-MAIN-20220516235937-20220517025937-00758.warc.gz\"}"}
https://ixtrieve.fh-koeln.de/birds/litie/document/19029
[ "# Document (#19029)\n\nEditor\nSchreiber, K., S. Krauch u. S. Hedrich\nTitle\nInformationsmittel für Bibliotheken : Laufend kumulierendes Register\nIssue\n1/5. 1993/97 (Generalregister).\nImprint\nBerlin : Dbi\nYear\n1998\nPages\n369 S\nIsbn\n3-87068-546-8\nSeries\nInformationsmittel für Bibliotheken; Beih.6\nAbstract\nDas laufend kumulierende Register zu 'Informationsmittel für Bibliotheken', dessen vierte Ausgabe für die Berichtszeit 1/5.1993/97 hiermit vorgelegt wird, ersetzt als Generalregister der ersten 5 Jahrgänge von IfB alle vorhergehende Register. Enthält 2.438 Einträge. Ab Jg.6 wird eine neue laufend kumulierende Folge einsetzen\nFootnote\nRez. in: ZfBB 45(1998) H.12, S.664-665 (G. Rost); IfB 6(1998) H.3/4, S.301-304 (T. Seela)\nTheme\nBibliographie\nInformationsmittel\nLiteraturübersicht\n\n## Similar documents (content)\n\n1. Gödert, W.; Oßwald, A.; Rösch, H.; Sleegers, P.: Evit@: Evaluation elektronischer Informationsmittel (2000) 0.33\n```0.32854635 = sum of:\n0.32854635 = product of:\n1.6427317 = sum of:\n0.015149567 = weight(abstract_txt:eine in 2779) [ClassicSimilarity], result of:\n0.015149567 = score(doc=2779,freq=2.0), product of:\n0.032748647 = queryWeight, product of:\n3.4891577 = idf(docFreq=3657, maxDocs=44083)\n0.009385832 = queryNorm\n0.4626013 = fieldWeight in 2779, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4891577 = idf(docFreq=3657, maxDocs=44083)\n0.09375 = fieldNorm(doc=2779)\n0.03283565 = weight(abstract_txt:alle in 2779) [ClassicSimilarity], result of:\n0.03283565 = score(doc=2779,freq=1.0), product of:\n0.06910354 = queryWeight, product of:\n1.4526248 = boost\n5.068437 = idf(docFreq=753, maxDocs=44083)\n0.009385832 = queryNorm\n0.47516596 = fieldWeight in 2779, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.068437 = idf(docFreq=753, maxDocs=44083)\n0.09375 = fieldNorm(doc=2779)\n0.06073249 = weight(abstract_txt:dessen in 2779) [ClassicSimilarity], result of:\n0.06073249 = score(doc=2779,freq=1.0), product of:\n0.10412394 = queryWeight, product of:\n1.783112 = boost\n6.221559 = idf(docFreq=237, maxDocs=44083)\n0.009385832 = queryNorm\n0.58327115 = fieldWeight in 2779, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.221559 = idf(docFreq=237, maxDocs=44083)\n0.09375 = fieldNorm(doc=2779)\n1.534014 = weight(title_txt:informationsmittel in 2779) [ClassicSimilarity], result of:\n1.534014 = score(doc=2779,freq=1.0), product of:\n0.36997035 = queryWeight, product of:\n4.7533717 = boost\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.009385832 = queryNorm\n4.146316 = fieldWeight in 2779, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.5 = fieldNorm(doc=2779)\n0.2 = coord(4/20)\n```\n2. Schreiber, K.: Biographische Informationsmittel : Typologie mit Beispielen. 
Rezensionen von 836 allgemeinen und fachlichen Sammelbiographien von Anfang der neunziger Jahre bis Ende 1998; samt einem Verzeichnis mit Schlagwortregister aller von 1974-1993 in der Rubrik Ausgewählte Bibliographien und andere Nachschlagewerke der Zeitschrift für Bibliothekswesen und Bibliographie sowie in IfB 1(1993) - 6(1998) besprochenen biographischen Informationsmittel (1999) 0.27\n```0.26781654 = sum of:\n0.26781654 = product of:\n1.0712662 = sum of:\n0.0142831495 = weight(abstract_txt:eine in 5019) [ClassicSimilarity], result of:\n0.0142831495 = score(doc=5019,freq=1.0), product of:\n0.032748647 = queryWeight, product of:\n3.4891577 = idf(docFreq=3657, maxDocs=44083)\n0.009385832 = queryNorm\n0.4361447 = fieldWeight in 5019, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4891577 = idf(docFreq=3657, maxDocs=44083)\n0.125 = fieldNorm(doc=5019)\n0.07587383 = weight(abstract_txt:enthält in 5019) [ClassicSimilarity], result of:\n0.07587383 = score(doc=5019,freq=1.0), product of:\n0.09970234 = queryWeight, product of:\n1.7448416 = boost\n6.0880275 = idf(docFreq=271, maxDocs=44083)\n0.009385832 = queryNorm\n0.76100343 = fieldWeight in 5019, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.0880275 = idf(docFreq=271, maxDocs=44083)\n0.125 = fieldNorm(doc=5019)\n0.036068898 = weight(abstract_txt:wird in 5019) [ClassicSimilarity], result of:\n0.036068898 = score(doc=5019,freq=1.0), product of:\n0.0765143 = queryWeight, product of:\n2.1616712 = boost\n3.771206 = idf(docFreq=2758, maxDocs=44083)\n0.009385832 = queryNorm\n0.47140074 = fieldWeight in 5019, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.771206 = idf(docFreq=2758, maxDocs=44083)\n0.125 = fieldNorm(doc=5019)\n0.54235584 = weight(title_txt:informationsmittel in 5019) [ClassicSimilarity], result of:\n0.54235584 = score(doc=5019,freq=2.0), product of:\n0.36997035 = queryWeight, product of:\n4.7533717 = boost\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.009385832 = queryNorm\n1.465944 = fieldWeight in 5019, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.125 = fieldNorm(doc=5019)\n0.40268448 = weight(abstract_txt:register in 5019) [ClassicSimilarity], result of:\n0.40268448 = score(doc=5019,freq=1.0), product of:\n0.43751645 = queryWeight, product of:\n6.3308372 = boost\n7.363096 = idf(docFreq=75, maxDocs=44083)\n0.009385832 = queryNorm\n0.920387 = fieldWeight in 5019, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.363096 = idf(docFreq=75, maxDocs=44083)\n0.125 = fieldNorm(doc=5019)\n0.25 = coord(5/20)\n```\n3. 
Gödert, W.; Oßwald, A.; Rösch, H.; Sleegers, P.: Evit@: Evaluation elektronischer Informationsmittel (2000) 0.25\n```0.24796332 = sum of:\n0.24796332 = product of:\n1.6530888 = sum of:\n0.017853938 = weight(abstract_txt:eine in 2882) [ClassicSimilarity], result of:\n0.017853938 = score(doc=2882,freq=1.0), product of:\n0.032748647 = queryWeight, product of:\n3.4891577 = idf(docFreq=3657, maxDocs=44083)\n0.009385832 = queryNorm\n0.5451809 = fieldWeight in 2882, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4891577 = idf(docFreq=3657, maxDocs=44083)\n0.15625 = fieldNorm(doc=2882)\n0.101220824 = weight(abstract_txt:dessen in 2882) [ClassicSimilarity], result of:\n0.101220824 = score(doc=2882,freq=1.0), product of:\n0.10412394 = queryWeight, product of:\n1.783112 = boost\n6.221559 = idf(docFreq=237, maxDocs=44083)\n0.009385832 = queryNorm\n0.9721186 = fieldWeight in 2882, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.221559 = idf(docFreq=237, maxDocs=44083)\n0.15625 = fieldNorm(doc=2882)\n1.534014 = weight(title_txt:informationsmittel in 2882) [ClassicSimilarity], result of:\n1.534014 = score(doc=2882,freq=1.0), product of:\n0.36997035 = queryWeight, product of:\n4.7533717 = boost\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.009385832 = queryNorm\n4.146316 = fieldWeight in 2882, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.5 = fieldNorm(doc=2882)\n0.15 = coord(3/20)\n```\n4. Informationsmittel für Bibliotheken (IFB) : Besprechungsdienst und Berichte (1997) 0.19\n```0.18726905 = sum of:\n0.18726905 = product of:\n1.2484603 = sum of:\n0.0663896 = weight(abstract_txt:enthält in 2029) [ClassicSimilarity], result of:\n0.0663896 = score(doc=2029,freq=1.0), product of:\n0.09970234 = queryWeight, product of:\n1.7448416 = boost\n6.0880275 = idf(docFreq=271, maxDocs=44083)\n0.009385832 = queryNorm\n0.665878 = fieldWeight in 2029, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.0880275 = idf(docFreq=271, maxDocs=44083)\n0.109375 = fieldNorm(doc=2029)\n0.031560287 = weight(abstract_txt:wird in 2029) [ClassicSimilarity], result of:\n0.031560287 = score(doc=2029,freq=1.0), product of:\n0.0765143 = queryWeight, product of:\n2.1616712 = boost\n3.771206 = idf(docFreq=2758, maxDocs=44083)\n0.009385832 = queryNorm\n0.41247565 = fieldWeight in 2029, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.771206 = idf(docFreq=2758, maxDocs=44083)\n0.109375 = fieldNorm(doc=2029)\n1.1505104 = weight(title_txt:informationsmittel in 2029) [ClassicSimilarity], result of:\n1.1505104 = score(doc=2029,freq=1.0), product of:\n0.36997035 = queryWeight, product of:\n4.7533717 = boost\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.009385832 = queryNorm\n3.109737 = fieldWeight in 2029, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.375 = fieldNorm(doc=2029)\n0.15 = coord(3/20)\n```\n5. 
Sandner, M.: DDC-DACHS : Bericht über die Eröffnung der von der VÖB geförderten Ausstellung ddc.deutsch \"Die Dewey-Dezimalklassifikation und der deutschsprachige Raum\" (2005) 0.15\n```0.14712794 = sum of:\n0.14712794 = product of:\n0.42036557 = sum of:\n0.008837247 = weight(abstract_txt:eine in 4355) [ClassicSimilarity], result of:\n0.008837247 = score(doc=4355,freq=2.0), product of:\n0.032748647 = queryWeight, product of:\n3.4891577 = idf(docFreq=3657, maxDocs=44083)\n0.009385832 = queryNorm\n0.26985076 = fieldWeight in 4355, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4891577 = idf(docFreq=3657, maxDocs=44083)\n0.0546875 = fieldNorm(doc=4355)\n0.01915413 = weight(abstract_txt:alle in 4355) [ClassicSimilarity], result of:\n0.01915413 = score(doc=4355,freq=1.0), product of:\n0.06910354 = queryWeight, product of:\n1.4526248 = boost\n5.068437 = idf(docFreq=753, maxDocs=44083)\n0.009385832 = queryNorm\n0.27718017 = fieldWeight in 4355, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.068437 = idf(docFreq=753, maxDocs=44083)\n0.0546875 = fieldNorm(doc=4355)\n0.026684593 = weight(abstract_txt:ersten in 4355) [ClassicSimilarity], result of:\n0.026684593 = score(doc=4355,freq=1.0), product of:\n0.086198375 = queryWeight, product of:\n1.6223811 = boost\n5.660743 = idf(docFreq=416, maxDocs=44083)\n0.009385832 = queryNorm\n0.3095719 = fieldWeight in 4355, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.660743 = idf(docFreq=416, maxDocs=44083)\n0.0546875 = fieldNorm(doc=4355)\n0.044044923 = weight(abstract_txt:ausgabe in 4355) [ClassicSimilarity], result of:\n0.044044923 = score(doc=4355,freq=1.0), product of:\n0.12038968 = queryWeight, product of:\n1.9173348 = boost\n6.689883 = idf(docFreq=148, maxDocs=44083)\n0.009385832 = queryNorm\n0.36585298 = fieldWeight in 4355, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.689883 = idf(docFreq=148, maxDocs=44083)\n0.0546875 = fieldNorm(doc=4355)\n0.0386533 = weight(abstract_txt:wird in 4355) [ClassicSimilarity], result of:\n0.0386533 = score(doc=4355,freq=6.0), product of:\n0.0765143 = queryWeight, product of:\n2.1616712 = boost\n3.771206 = idf(docFreq=2758, maxDocs=44083)\n0.009385832 = queryNorm\n0.50517744 = fieldWeight in 4355, product of:\n2.4494898 = tf(freq=6.0), with freq of:\n6.0 = termFreq=6.0\n3.771206 = idf(docFreq=2758, maxDocs=44083)\n0.0546875 = fieldNorm(doc=4355)\n0.03131719 = weight(abstract_txt:bibliotheken in 4355) [ClassicSimilarity], result of:\n0.03131719 = score(doc=4355,freq=1.0), product of:\n0.12083437 = queryWeight, product of:\n2.7165241 = boost\n4.7391906 = idf(docFreq=1047, maxDocs=44083)\n0.009385832 = queryNorm\n0.2591745 = fieldWeight in 4355, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.7391906 = idf(docFreq=1047, maxDocs=44083)\n0.0546875 = fieldNorm(doc=4355)\n0.25167418 = weight(abstract_txt:laufend in 4355) [ClassicSimilarity], result of:\n0.25167418 = score(doc=4355,freq=1.0), product of:\n0.55495554 = queryWeight, product of:\n7.130058 = boost\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.009385832 = queryNorm\n0.4535033 = fieldWeight in 4355, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.292632 = idf(docFreq=29, maxDocs=44083)\n0.0546875 = fieldNorm(doc=4355)\n0.35 = coord(7/20)\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5917488,"math_prob":0.9968878,"size":10771,"snap":"2023-40-2023-50","text_gpt3_token_len":4145,"char_repetition_ratio":0.23024055,"word_repetition_ratio":0.43054488,"special_character_ratio":0.51898617,"punctuation_ratio":0.28174764,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99948853,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T09:02:18Z\",\"WARC-Record-ID\":\"<urn:uuid:e681344e-6f10-43ae-ab8b-908330810ea9>\",\"Content-Length\":\"20471\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df7f2ab9-9895-46c8-9dff-a2788d2922cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:ccc75dac-724a-4958-ae0d-780ac9b0362f>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"https://ixtrieve.fh-koeln.de/birds/litie/document/19029\",\"WARC-Payload-Digest\":\"sha1:PQRGQGS62S3CUSRE4HIXIEV4E3TWIU5T\",\"WARC-Block-Digest\":\"sha1:BOWUQ5P3MVLEEJZCHMVL22PUDAIUYQ6G\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100286.10_warc_CC-MAIN-20231201084429-20231201114429-00446.warc.gz\"}"}
http://www.merlot.org/merlot/materials.htm?category=2574&community=3022
[ "# MERLOT Materials\n\n#### Filter by\n\n1-24 of 77 results for: MERLOT Materials\n\n#### Symmetry and Tilings\n\nFantastic site that gives examples of various tilings. In addition, has a Kali link which you can use to design your own... see more\n\n#### The Math Forum - Internet Mathematics... The Math Forum - Internet Mathematics Library\n\nA huge list of resources in various topics of mathematics and math education.\n\n#### The Regression Line\n\nThis site is part of the NCTM's Student i-Math Investigations website. It introduces the regression line and incorporates... see more\n\n#### Diffusion of Toxic Materials in a... Diffusion of Toxic Materials in a Landfill\n\nThis is a subsite associated with the parent site called IDEA (Internet Differential Equation Activities). The activity... see more\n\n#### IDEA: Internet Differential Equations... IDEA: Internet Differential Equations Activities\n\nQuoted from the site: \"IDEA is an interdisciplinary effort to provide students and teachers around the world with... see more\n\n#### Integer Optimization and the Network... Integer Optimization and the Network Models\n\nIt covers integer and network optimization with numerical examples and applications\n\n#### Math Facts Fun\n\nThis is a PowerPoint presentation to be utilized for primary math Kiosk. It utilizes Internet links to pre-test... see more\n\n#### Finite Math for Windows\n\nFinite Math for Windows is a software package that enables students to easily solve problems and/or check their work in... see more\n\n#### Applied Management Science\n\nDecision-Making is central to human activity. Thus, we are all decision-makers. However, a \"good\" decision making starts... see more\n\n#### Linear Graphing Utility\n\nThis is a simple Flash based application that can be used to help you present graphing concepts while using the computer.... see more\n\n#### Coastline (Mathematics, Fractals)\n\nIncludes teacher generated activities. A section of the coast line is randomly given an angle (two lines), resulting in a... see more\n\n#### Finding Equivalent Fractions\n\nThis is a goal directed lesson plan which guides students in finding equivalent fractions using dot paper rectangles and... see more\n\n#### Fractals in Science (Mathematics,... Fractals in Science (Mathematics, Chemistry)\n\nbook Fractals in Science: Directory of applets. The Center for Polymer Studies has developed a number of hands-on... see more\n\n#### From Linear to Nonlinear... From Linear to Nonlinear Optimization: The Missing Chapter\n\nIt presents a solution algorithm for a large class of problems with linear constraints and a continuous objective... see more\n\n#### IMAGE ESTIMATION BY EXAMPLE: ... IMAGE ESTIMATION BY EXAMPLE: Geophysical Soundings Image Construction (GEE)\n\nThis is a free, online textbook. According to the author, 'We make discoveries about reality by examining the discrepancy... see more\n\n#### James Peirce WTF Project\n\nA study measuring the connection between biological scenarios and mathematical models.\n\n#### Linear Regression Using R: An... Linear Regression Using R: An Introduction to Data Modeling\n\nLinear Regression Using R: An Introduction to Data Modelingpresents one of the fundamental data modeling techniques in an... see more\n\n#### Math 113B: Intro to Mathematical... Math 113B: Intro to Mathematical Modeling in Biology\n\nThis course is intended for both mathematics and biology undergrads with a basic mathematics background, and it consists... 
see more\n\n#### Mathematical Programming Glossary\n\nThis contains terms specific to mathematical programming, and some terms from other disciplines, notably economics,... see more\n\n#### Minkowski Physlet\n\nA geometric view of the Lorentz transformation.\n\n#### Principles of Calculus Modeling: An... Principles of Calculus Modeling: An Interactive Approach\n\nQuoted from the site: This Web site contains a page for each section of the book Principles of Calculus Modeling, an... see more\n\n#### Sensitivity Analysis\n\nThis site presents a collection of recent developments on sensitivity analysis. It includes sensitivity analysis in... see more\n\n#### Solution to the Two-Person Zero-Sum... Solution to the Two-Person Zero-Sum Games\n\nThis JavaScript provides the optimal solution to the two-person zero-sum games with up to five strategies for each... see more" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8891095,"math_prob":0.7572111,"size":2583,"snap":"2023-14-2023-23","text_gpt3_token_len":512,"char_repetition_ratio":0.14889492,"word_repetition_ratio":0.02238806,"special_character_ratio":0.200542,"punctuation_ratio":0.1680162,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97194076,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T19:10:38Z\",\"WARC-Record-ID\":\"<urn:uuid:cebed4e3-1c77-4ff0-bd95-8b5934344a9e>\",\"Content-Length\":\"313482\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65a2f55d-26e0-4aea-ab7e-587b729fff12>\",\"WARC-Concurrent-To\":\"<urn:uuid:643add5a-c2de-4036-8996-d867d71b2d29>\",\"WARC-IP-Address\":\"54.71.97.148\",\"WARC-Target-URI\":\"http://www.merlot.org/merlot/materials.htm?category=2574&community=3022\",\"WARC-Payload-Digest\":\"sha1:GRJX7DFUJBMRKD2X4OAZ6SKDVHUUR4ZF\",\"WARC-Block-Digest\":\"sha1:SG4DNZZFMR2AVHOLIUNWG5JQVNQLBV5J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648000.54_warc_CC-MAIN-20230601175345-20230601205345-00214.warc.gz\"}"}
https://www.brighthubeducation.com/homework-math-help/32850-overview-of-the-pythagorean-theorem/
[ "# Math Help: Pythagoras Theorem and Theorem Proofs\n\nPage content\n\nAround 2530 years ago, Pythagoras first created the Pythagorean Theorem. A simple Pythagorean Theorem proof is making a pyramid with a perfect square or rectangular base.\n\n## The Theorem\n\nThe square of the hypotenuse of a right angled triangle is equal to the sum of squares of other two sides.\n\nSee the attached snap of the triangle below (Click on it to enlarge):\n\nSo, as per Pythagoras theorem we can write that:\n\n(BC)2 = (AB)2 + (AC)2\n\nIf we consider the side AB as a, AC as c and BC as b then we can write the Pythagorean Theorem as\n\nb2=a2+ c2\n\nNow you are in a position to calculate the length of any one side of a right angled triangle, if the lengths of the other two sides are given.\n\n## Practice Problems\n\nSolve the following examples. Assume c is the hypotenuse of a right angled triangle and a, b are the other two sides of the same triangle:\n\na = 3; b = 4; c=?\n\na=5; c= 13; b=?\n\na=8; b=15; c=?\n\na= 9; c=41; b=?\n\n## Pythagorean Theorem Proof", null, "Pythagoras' Theorem has more than 300 proofs. The simplest proof of the theorem is based on the similar triangles concept:\n\n• Take the triangle ABC with AB=a, AC=c and BC=b.\n• Drop a line from A to D which is perpendicular to BC.\n• Triangle ACD is similar to the triangle ABC as:\n\nAngle ADC = Angle CAB = 90 degree\n\nAngle ACD = Angle ACB\n\nSide AC is common for both the triangle\n\nSo, we can write from similar triangle principle:\n\nc / b = DC / c\n\nc2 = b X DC …………….eqn.1\n\n• Again, triangle ABD and triangle ABC are similar because:\n\nAngle ADB = Angle CAB = 90 degree\n\nAngle ABD = Angle ABC\n\nSide AB is common for both triangles.\n\nSo, we can write:\n\na / b = BD / a\n\na2 = b X BD …………….eqn.2\n\n• From eqn.1 and eqn.2 we can write:\n\nc2 +a2 = (b X DC) + (b X BD)\n\n= b X (DC + BD)\n\n= b X BC(as, BC consists of DC and BD)\n\n= b X b(as, we already assume BC=b)\n\n= b2\n\n## Real Life Examples\n\n• Making a perfect rectangular basketball and volleyball court.\n• Measuring the height of ramp.\n• Calculating distance between two points if co-ordinates of the points are given.\n\n## Pythagoras Triples\n\nPythagoras triples are sets of three integer numbers which follows Pythagoras' Theorem. For example, take 3, 4, 5.\n\nRemember the Pythagoras theorem (b2=a2+ c2).\n\nNow, if you take a=3, c=4 then from the theorem b will be equal to 5. There are many such sets of integers like this: 5, 12,13; 17, 24, 25; 9, 40, 41 etc.\n\nPeople have been using these triples even before human beings learned to write." ]
[ null, "https://img.bhs4.com/42/6/426fa0eb77ba63da215617329928f2057eef0acc_large.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.882632,"math_prob":0.99768025,"size":2444,"snap":"2021-43-2021-49","text_gpt3_token_len":716,"char_repetition_ratio":0.1532787,"word_repetition_ratio":0.025263159,"special_character_ratio":0.28682488,"punctuation_ratio":0.13308688,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998665,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T17:48:14Z\",\"WARC-Record-ID\":\"<urn:uuid:7ff2c271-d009-48cb-8cdf-8aa0c278e2ca>\",\"Content-Length\":\"52627\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3084bb7f-f2f7-48e1-bc29-606eccfae093>\",\"WARC-Concurrent-To\":\"<urn:uuid:3894abd2-b6ad-42ed-9484-43c9e99de35c>\",\"WARC-IP-Address\":\"104.21.62.244\",\"WARC-Target-URI\":\"https://www.brighthubeducation.com/homework-math-help/32850-overview-of-the-pythagorean-theorem/\",\"WARC-Payload-Digest\":\"sha1:6O2WVEUS4RVGXHY4SO4KQCOTLE52Z2QY\",\"WARC-Block-Digest\":\"sha1:JLKSKRFDSWBA3PCCVFMSY5B7QYR6L6EO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587719.64_warc_CC-MAIN-20211025154225-20211025184225-00057.warc.gz\"}"}
https://www.ams.org/journals/tran/1981-268-01/S0002-9947-1981-0628445-8/home.html
[ "", null, "", null, "", null, "ISSN 1088-6850(online) ISSN 0002-9947(print)\n\nThe stable geometric dimension of vector bundles over real projective spaces\n\nAuthors: Donald M. Davis, Sam Gitler and Mark Mahowald\nJournal: Trans. Amer. Math. Soc. 268 (1981), 39-61\nMSC: Primary 55N15; Secondary 55R25\nDOI: https://doi.org/10.1090/S0002-9947-1981-0628445-8\nCorrection: Trans. Amer. Math. Soc. 280 (1983), 841-843.\nMathSciNet review: 628445\nFull-text PDF Free Access\n\nAbstract: An elementary argument shows that the geometric dimension of any vector bundle of order ${2^e}$ over $R{P^n}$ depends only on $e$ and the residue of $n \\bmod 8$ for $n$ sufficiently large. In this paper we calculate this geometric dimension, which is approximately $2e$. The nonlifting results are easily obtained using the spectrum $bJ$. The lifting results require $bo$-resolutions. Half of the paper is devoted to proving Mahowald’s theorem that beginning with the second stage $bo$-resolutions act almost like $K({Z_2})$-resolutions.\n\n[Enhancements On Off] (What's this?)\n\nRetrieve articles in Transactions of the American Mathematical Society with MSC: 55N15, 55R25\n\nRetrieve articles in all journals with MSC: 55N15, 55R25\n\nKeywords: Geometric dimension of vector bundles, real projective space, obstruction theory, <IMG WIDTH=\"24\" HEIGHT=\"19\" ALIGN=\"BOTTOM\" BORDER=\"0\" SRC=\"images/img22.gif\" ALT=\"$bo$\">-resolutions" ]
[ null, "https://www.ams.org/images/remote-access-icon.png", null, "https://www.ams.org/publications/journals/images/journal.cover.tran.gif", null, "https://www.ams.org/publications/journals/images/open-access-green-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6328929,"math_prob":0.9576916,"size":4587,"snap":"2021-31-2021-39","text_gpt3_token_len":1482,"char_repetition_ratio":0.103207506,"word_repetition_ratio":0.02163833,"special_character_ratio":0.37431872,"punctuation_ratio":0.2812803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97001386,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T18:41:16Z\",\"WARC-Record-ID\":\"<urn:uuid:9e1fb27a-11c3-4133-abf2-9eda501ab651>\",\"Content-Length\":\"54632\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e33b75f6-3161-409d-a249-2939a7e9809f>\",\"WARC-Concurrent-To\":\"<urn:uuid:0557d933-ad3e-40d4-acfd-39fc0a24a043>\",\"WARC-IP-Address\":\"130.44.204.100\",\"WARC-Target-URI\":\"https://www.ams.org/journals/tran/1981-268-01/S0002-9947-1981-0628445-8/home.html\",\"WARC-Payload-Digest\":\"sha1:BKGL6EQ2Z5HRVFGTAT7JQAWLDN55VHJD\",\"WARC-Block-Digest\":\"sha1:PBKWZZDAKJGX6XZ3IICZTSOYMRGKPFD6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154466.61_warc_CC-MAIN-20210803155731-20210803185731-00562.warc.gz\"}"}
https://www.bartleby.com/solution-answer/chapter-32-problem-31e-single-variable-calculus-early-transcendentals-volume-i-8th-edition/9781305270343/find-an-equation-of-the-tangent-line-to-the-given-curve-at-the-specified-point-yx21x2x110/1b6fe587-e4d5-11e8-9bb5-0ece094302b6
[ "", null, "", null, "", null, "Chapter 3.2, Problem 31E", null, "### Single Variable Calculus: Early Tr...\n\n8th Edition\nJames Stewart\nISBN: 9781305270343\n\n#### Solutions\n\nChapter\nSection", null, "### Single Variable Calculus: Early Tr...\n\n8th Edition\nJames Stewart\nISBN: 9781305270343\nTextbook Problem\n\n# Find an equation of the tangent line to the given curve at the specified point. y = x 2 − 1 x 2 + x + 1 ,   ( 1 , 0 )\n\nTo determine\n\nTo find: The equation of the tangent line to the curve at the point.\n\nExplanation\n\nGiven:\n\nThe curve is y=x21x2+x+1.\n\nThe point is (1,0).\n\nDerivative rules:\n\n(1) Quotient Rule: If f1(x) and f2(x) are both differentiable, then\n\nddx[f1(x)f2(x)]=f2(x)ddx[f1(x)]f1(x)ddx[f2(x)][f2x]2\n\n(2) Power Rule: ddx(xn)=nxn1\n\n(3) Sum Rule: ddx[f(x)+g(x)]=ddx(f(x))+ddx(g(x))\n\n(4) Difference Rule: ddx[f(x)g(x)]=ddx(f(x))ddx(g(x))\n\nFormula used:\n\nThe equation of the tangent line at (x1,y1) is, yy1=m(xx1) (1)\n\nwhere, m is the slope of the tangent line at (x1,y1) and m=dydx|x=x1.\n\nCalculation:\n\nThe derivative of y is dydx, which is obtained as follows,\n\ndydx=ddx(y)=ddx(x21x2+x+1)\n\nApply the quotient rule (1) and simplify the terms,\n\ndydx=(x2+x+1)ddt(x21)(x21)ddx(x2+x+1)(x2+x+1)2\n\nApply the derivative rule (3) and (4),\n\ndydx=(x2+x+1)(ddt(x2)\n\n### Still sussing out bartleby?\n\nCheck out a sample textbook solution.\n\nSee a sample solution\n\n#### The Solution to Your Study Problems\n\nBartleby provides explanations to thousands of textbook problems written by our experts, many with advanced degrees!\n\nGet Started\n\n#### Convert the expressions in Exercises 6584 to power form. 3x453x+43xx\n\nFinite Mathematics and Applied Calculus (MindTap Course List)\n\n#### Use long division to find.\n\nMathematical Applications for the Management, Life, and Social Sciences\n\n#### Evaluate 6 8 24 30\n\nStudy Guide for Stewart's Multivariable Calculus, 8th\n\n#### Solve each equation. x436=35x2\n\nCollege Algebra (MindTap Course List)", null, "" ]
[ null, "https://www.bartleby.com/static/search-icon-white.svg", null, "https://www.bartleby.com/static/close-grey.svg", null, "https://www.bartleby.com/static/solution-list.svg", null, "https://www.bartleby.com/isbn_cover_images/9781305270343/9781305270343_largeCoverImage.gif", null, "https://www.bartleby.com/isbn_cover_images/9781305270343/9781305270343_largeCoverImage.gif", null, "https://www.bartleby.com/static/logo.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6750164,"math_prob":0.9978224,"size":2291,"snap":"2019-43-2019-47","text_gpt3_token_len":508,"char_repetition_ratio":0.18233493,"word_repetition_ratio":0.1335505,"special_character_ratio":0.17503273,"punctuation_ratio":0.093023255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996424,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T20:49:58Z\",\"WARC-Record-ID\":\"<urn:uuid:022d834e-b308-4a89-a933-3c43d19f2be5>\",\"Content-Length\":\"966969\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c282a8c0-1e65-4b0a-a045-27a9509b1ff3>\",\"WARC-Concurrent-To\":\"<urn:uuid:19a1557a-890c-4dfa-848f-23b19d2c1960>\",\"WARC-IP-Address\":\"99.84.101.78\",\"WARC-Target-URI\":\"https://www.bartleby.com/solution-answer/chapter-32-problem-31e-single-variable-calculus-early-transcendentals-volume-i-8th-edition/9781305270343/find-an-equation-of-the-tangent-line-to-the-given-curve-at-the-specified-point-yx21x2x110/1b6fe587-e4d5-11e8-9bb5-0ece094302b6\",\"WARC-Payload-Digest\":\"sha1:FLI4IZKUHXS2C3KZF3QGAMFDC5QEIOUX\",\"WARC-Block-Digest\":\"sha1:NYKM7HTV55ZVA4AZLG4VILHVIRBLFM7L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671548.98_warc_CC-MAIN-20191122194802-20191122223802-00163.warc.gz\"}"}
https://www.complexnetworks.fr/inhomogeneous-hypergraphs/
[ "# Inhomogeneous Hypergraphs\n\nlie de Panafieu\n\nJeudi 02 juillet 2015 à 11h, salle 24-25/405\n\nWe introduce the inhomogeneous hypergraph model. Each edge can contain an arbitrary number of vertices, the vertices are colored, and each edge receives a weight which depends on the colors of the vertices it contains. This model provides a uniform setting to solve problems arising from various domains of computer science and mathematics. We will focus on applications to the enumeration of satisfied and satisfiable instances of Constraint Satisfaction Problems (CSP), and compute the limit probability for a random graph to be bipartit, the limit probability of satisfiability of systems of equations, the enumeration of properly k-colored graphs and investigate some graphs coming from social networks. We will present results on the asymptotics of inhomogeneous hypergraphs and their typical structure. Our main tool is analytic combinatorics." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8580421,"math_prob":0.89679796,"size":910,"snap":"2023-40-2023-50","text_gpt3_token_len":181,"char_repetition_ratio":0.105960265,"word_repetition_ratio":0.0,"special_character_ratio":0.18461539,"punctuation_ratio":0.078947365,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9716523,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T13:27:00Z\",\"WARC-Record-ID\":\"<urn:uuid:e5f6382d-09c9-4117-85b9-79d8f9d6a805>\",\"Content-Length\":\"75541\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:32faf19c-fa19-49ea-89c8-8b4cab23f3a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:019f0a78-6ef3-4738-b720-f9262df5e5b3>\",\"WARC-IP-Address\":\"213.186.33.16\",\"WARC-Target-URI\":\"https://www.complexnetworks.fr/inhomogeneous-hypergraphs/\",\"WARC-Payload-Digest\":\"sha1:6TUZ6I54DD3YTPKOVS2NME7GGWZT5NHH\",\"WARC-Block-Digest\":\"sha1:SMBVWCVFJOBZ3AVQCQRT72ALEZGSAIZV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510676.40_warc_CC-MAIN-20230930113949-20230930143949-00620.warc.gz\"}"}
https://www.teachoo.com/1284/457/Example-3---In-figure--AB-is-a-diameter-of-circle--CD/category/Examples/
[ "Examples\n\nChapter 9 Class 9 Circles\nSerial order wise", null, "", null, "", null, "Learn in your speed, with individual attention - Teachoo Maths 1-on-1 Class\n\n### Transcript\n\nExample 2 In figure, AB is a diameter of the circle, CD is a chord equal to the radius of the circle. AC and BD when extended intersect at a point E. Prove that ∠ AEB = 60°. Given: AB is diameter of circle Chord CD, where CD = Radius of circle To prove: ∠AEB = 60° Construction: Join OC , OD & BC Proof: In Δ OCD OC = OD = CD = Radius of circle Since all sides are equal, Δ OCD is equilateral triangle ∠ COD = 60° Now, For arc CD subtends ∠ COD at centre & ∠ CBD at point B ∴ ∠ COD = 2 ∠ CBD 60° = 2∠ CBD 2∠ CBD = 60° ∠ CBD = (60°)/2 = 30° Now, Since AB is a diameter So, ∠ ACB = 90° Since AE is a line ∠ ACB + ∠ ECB = 180° 90° + ∠ ECB = 180° ∠ ECB = 180° – 90° = 90° In Δ ECB ∠ CEB + ∠ ECB + ∠ CBE = 180° ∠ CEB + 90° + 30° = 180° ∠ CEB + 120° = 180° ∠ CEB = 180° – 120° ∠ CEB = 60°", null, "" ]
[ null, "https://d1avenlh0i1xmr.cloudfront.net/4ac4bd3e-cb1c-4523-b723-37d46c9254dc/slide5.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/bfcf8f13-504b-4b73-a62b-da895d93d4d6/slide6.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/9497c5c4-8fe7-43bd-abbc-9b9e39f765e3/slide7.jpg", null, "https://www.teachoo.com/static/misc/Davneet_Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8531032,"math_prob":0.9998388,"size":822,"snap":"2023-40-2023-50","text_gpt3_token_len":321,"char_repetition_ratio":0.15036675,"word_repetition_ratio":0.029268293,"special_character_ratio":0.4245742,"punctuation_ratio":0.093023255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.993622,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,5,null,5,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T14:18:16Z\",\"WARC-Record-ID\":\"<urn:uuid:dd76069a-3d6a-4d7f-9917-714c18fd5582>\",\"Content-Length\":\"148203\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e7f1f187-7f1f-405d-9a7e-0ef68c7ee14c>\",\"WARC-Concurrent-To\":\"<urn:uuid:213e3dff-daba-42a3-ab33-596b48b8bf52>\",\"WARC-IP-Address\":\"23.22.5.68\",\"WARC-Target-URI\":\"https://www.teachoo.com/1284/457/Example-3---In-figure--AB-is-a-diameter-of-circle--CD/category/Examples/\",\"WARC-Payload-Digest\":\"sha1:YHAIJ6L4JJ7LDILVVX6ZFNIWFSIAXWDD\",\"WARC-Block-Digest\":\"sha1:YFPXD7Q75TCJKTS3H2H3G6ZXKGBTHGD7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100912.91_warc_CC-MAIN-20231209134916-20231209164916-00444.warc.gz\"}"}
https://au.mathworks.com/help/antenna/ug/field-analysis.html
[ "Main Content\n\n## Field Analysis\n\n### Radiation Pattern\n\nThe radiation pattern of an antenna is the spatial distribution of power. The pattern displays the directivity or gain of the antenna. the power pattern of an antenna plots the transmitted or received power for a given radius. The field pattern of an antenna plots the variation in the electric or magnetic field for a given radius. The radiation pattern provides details such as the maximum and minimum value of the field quantity and the range of angles over which data is plotted.\n\n```h = helix; h.Turns = 13; h.Radius = 0.025; pattern(h,2.1e9)```", null, "Use the `pattern` function to plot radiation pattern of any antenna in the Antenna Toolbox™. By default, the function plots the directivity of the antenna. You can also plot the electric field and power pattern by using Type name-value pair argument of the pattern function.\n\n#### Lobes\n\nEach radiation pattern of an antenna contains radiation lobes. The lobes are divided into major lobes (also called main lobes) and minor lobes. Side lobes and back lobes are variations of minor lobes.\n\n```h = helix; h.Turns = 13; h.Radius = 0.025; patternElevation(h,2.1e9)```", null, "• `Major or Main lobe`: Shows the direction of maximum radiation, or power, of the antenna.\n\n• `Minor lobe`: Shows the radiation in undesired directions of antenna. The fewer the number of minor lobes, the greater the efficiency of the antenna. Side lobes are minor lobes that lie next to the major lobe. Back lobes are minor lobes that lie opposite to the major lobe of antenna.\n\n• `Null`: Shows the direction of zero radiation intensity of the antenna. Nulls usually lie between the major and minor lobe or in between the minor lobes of the antennas.\n\n#### Field Regions\n\nFor an antenna engineer and an electromagnetic compatibility (EMC) engineer, it is important to understand the regions around the antenna.", null, "The region around an antenna is defined in many ways. The most used description is a 2- or 3-region model. The 2-region model uses the terms near field and the far field to identify specific dominant field mechanisms. The diagram is a representation of antenna fields and boundaries. The 3-field region splits the near field into a transition zone, where a weakly radiative mechanism is at work.\n\n`Near-Field Region`: The near-field region is divided into two transition zones: a reactive zone and radiating zone.\n\n• `Reactive Near-Field Region`: This region is closest to the antenna surface. The reactive field dominates this region. The reactive field is stored energy, or standing waves. The fields in this region change rapidly with distance from the antenna. The equation for outer boundary of this region is: $R<0.62\\sqrt{{D}^{3}/\\lambda }$ where R is the distance from the antenna, λ is the wavelength, and D is the largest dimension of the antenna. This equation holds true for most antennas. In a very short dipole, the outer boundary of this region is $\\lambda /2\\pi$ from the antenna surface.\n\n• `Radiating Near-Field Region`: This region is also called the Fresnel region and lies between the reactive near-field region and the far-field region. The existence of this region depends on the largest dimension of the antenna and the wavelength of operation. The radiating fields are dominant in this region. The equation for the inner boundary of the region is equation $R\\ge 0.62\\sqrt{{D}^{3}/\\lambda }$ and the outer boundary is $R<2{D}^{2}/\\lambda$. This holds true for most antennas. 
The field distribution depends on the distance from the antenna.\n\n`Far-field Region`: This region is also called Fraunhofer region. In this region, the field distribution does not depend on the distance from the antenna. The electric and magnetic fields in this region are orthogonal to each other. This region contains propagating waves. The equation for the inner boundary of the far-field is $R=2{D}^{2}/\\lambda$ and the equation for the outer boundary is infinity.\n\n#### Directivity and Gain\n\nDirectivity is the ability of an antenna to radiate power in a particular direction. It can be defined as ratio of maximum radiation intensity in the desired direction to the average radiation intensity in all other directions. The equation for directivity is:\n\n`$D=\\frac{4\\pi U\\left(\\theta ,\\varphi \\right)}{{P}_{rad}}$`\n\nwhere:\n\n• D is the directivity of the antenna\n\n• U is the radiation intensity of the antenna\n\n• Prad is the average radiated power of antenna in all other directions\n\nAntenna directivity is dimensionless and is calculated in decibels compared to the isotropic radiator (dBi).\n\nThe gain of an antenna depends on the directivity and efficiency of the antenna. It can be defined as the ratio of maximum radiation intensity in the desired direction to the total power input of the antenna. The equation for gain of an antenna is:\n\n`$G=\\frac{4\\pi U\\left(\\theta ,\\varphi \\right)}{{P}_{in}}$`\n\nwhere:\n\n• G is the gain of the antenna\n\n• U is the radiation intensity of the antenna\n\n• Pin is the total power input to the antenna\n\nIf the efficiency of the antenna in the desired direction is `100%`, then the total power input to the antenna is equal to the total power radiated by the antenna, that is, ${P}_{in}={P}_{rad}$. In this case, the antenna directivity is equal to the antenna gain.\n\n### Beamwidth\n\nAntenna beamwidth is the angular measure of the antenna pattern coverage. As seen in the figure, the main beam is a region around maximum radiation. This beam is also called the major lobe, or main lobe of the antenna.", null, "Half power beamwidth (HPBW) is the angular separation in which the magnitude of the radiation pattern decreases by `50%` (or `-3dB`) from the peak of the main beam\n\nUse the `beamwidth` function to calculate the beamwidth of any antenna in Antenna Toolbox.\n\n### E-Plane and H Plane\n\n`E-plane`: Plane containing the electric field vector and the direction of maximum radiation. Consider a dipole antenna that is vertical along the z-axis. Use the `patternElevation` function to plot the elevation plane pattern. The elevation plane pattern shown captures the E-plane behavior of the dipole antenna.\n\n```d = dipole; patternElevation(d,70e6)```", null, "`H-plane`: Plane containing the magnetic field vector and the direction of maximum radiation. Use the `patternAzimuth` function to plot the azimuth plane pattern of a dipole antenna. The azimuthal variation in pattern shown captures the H-plane behavior of the dipole antenna.\n\n```d = dipole; patternAzimuth(d,70e6)```", null, "Use `EHfields` to measure the electric and magnetic fields of the antenna. The function can be used to calculate both near and far fields.\n\n### Polarization\n\nPolarization is the orientation of the electric field, or `E-field`, of an antenna. 
Polarization is classified as elliptical, linear, or circular.\n\n`Elliptical polarization`: If the electric field remains constant along the length but traces an ellipse as it moves forward, the field is elliptically polarized. Linear and circular polarizations are special cases of elliptical polarization.\n\n`Linear polarization`: If the electric field vector at a point in space is directed along a straight line, the field is linearly polarized. A linearly polarized antenna radiates only one plane and this plane contains the direction of propagation of the radio waves. There are two types of linear polarization:\n\n• `Horizontal Polarization`: The electric field vector is parallel to the ground plane. To view the horizontal polarization pattern of an antenna, use the `pattern` function, with the 'Polarization' name-value pair argument set to 'H'. The plot shows the horizontal polarization pattern of a dipole antenna:\n\n```d = dipole; pattern(d,70e6,'Polarization','H')```", null, "USA television networks use horizontally polarized antennas for broadcasting.\n\n• `Vertical Polarization`: The electric field vector is perpendicular to the ground plane. To view the vertical polarization pattern of an antenna, use the `pattern` function, with the 'Polarization' name-value pair argument set to 'V'. Vertical polarization is used when a signal has to be radiated in all directions. The plot shows the vertical polarization pattern of a dipole antenna:\n\n```d = dipole; pattern(d,70e6,'Polarization','V')```", null, "An AM radio broadcast antenna or an automobile whip antenna are some examples of vertically polarized antennas.\n\n`Circular Polarization`: If the electric field remains constant along the straight line but traces circle as it moves forward, the field is circularly polarized. This wave radiates in both vertical and horizontal planes. Circular polarization is most often used in satellite communications. There are two types of circular polarization:\n\n• `Right-Hand Circularly Polarized (RHCP)`: The electric field vector is traced in the counterclockwise direction. To view the RHCP pattern of an antenna, use the `pattern` function, with the 'Polarization' name-value pair argument set to 'RHCP'. The plot shows RHCP pattern of helix antenna:\n\n``` h = helix; h.Turns = 13; h.Radius = 0.025; pattern(h,1.8e9,'Polarization','RHCP')```", null, "• `Left-Hand circularly polarized (LHCP)`: The electric field vector is traced in the clockwise direction. To view the LHCP pattern of an antenna, use the `pattern` function, with the 'Polarization' name-value pair argument set to 'LHCP'. The plot shows LHCP pattern of helix antenna:\n\n``` h = helix; h.Turns = 13; h.Radius = 0.025; pattern(h,1.8e9,'Polarization','LHCP')```", null, "For efficient communications, the antennas at the transmitting and receiving end must have same polarization.\n\n### Axial Ratio\n\nAxial ratio (AR) of an antenna in a given direction quantifies the ratio of orthogonal field components radiated in a circularly polarized wave. An axial ratio of infinity implies a linearly polarized wave. When the axial ratio is 1, the radiated wave has pure circular polarization. Values greater than 1 imply elliptically polarized waves.\n\nUse `axialRatio` to calculate the axial ratio for any antenna in the Antenna Toolbox.\n\n Balanis, C.A. Antenna Theory. Analysis and Design, 3rd Ed. New York: Wiley, 2005.\n\n Stutzman, Warren.L and Thiele, Gary A. Antenna Theory and Design, 3rd Ed. New York: Wiley, 2013.\n\n Capps, C. 
Near Field or Far Field, EDN, August 16, 2001, pp.95 - pp.102.\n\n## Support\n\n#### Hybrid Beamforming for Massive MIMO Phased Array Systems\n\nDownload the white paper" ]
[ null, "https://au.mathworks.com/help/antenna/ug/radiationpattern.png", null, "https://au.mathworks.com/help/antenna/ug/lobes.png", null, "https://au.mathworks.com/help/antenna/ug/fieldregions.png", null, "https://au.mathworks.com/help/antenna/ug/beamwidth.png", null, "https://au.mathworks.com/help/antenna/ug/eplane.png", null, "https://au.mathworks.com/help/antenna/ug/hplane.png", null, "https://au.mathworks.com/help/antenna/ug/horizontalpolarization.png", null, "https://au.mathworks.com/help/antenna/ug/verticalpolarization.png", null, "https://au.mathworks.com/help/antenna/ug/rhcppolarization.png", null, "https://au.mathworks.com/help/antenna/ug/lhcppolarization.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8487299,"math_prob":0.95553684,"size":9628,"snap":"2021-21-2021-25","text_gpt3_token_len":2201,"char_repetition_ratio":0.20407315,"word_repetition_ratio":0.15184945,"special_character_ratio":0.20461155,"punctuation_ratio":0.12458655,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99069554,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T11:17:18Z\",\"WARC-Record-ID\":\"<urn:uuid:8d9a5f86-dd21-436d-b0a4-3b5ff1b1110f>\",\"Content-Length\":\"82566\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3db25a00-1d4d-436d-8e51-6c0e2e2676a5>\",\"WARC-Concurrent-To\":\"<urn:uuid:6dd0c391-38ea-43b3-b531-8547e5c76209>\",\"WARC-IP-Address\":\"23.55.200.52\",\"WARC-Target-URI\":\"https://au.mathworks.com/help/antenna/ug/field-analysis.html\",\"WARC-Payload-Digest\":\"sha1:S6ODH4KPVVQOV6DFUS5P3QCCKTX5IZDI\",\"WARC-Block-Digest\":\"sha1:MFMDHZQTMZ7K3WTSGMQGKEI4JCWOZU2M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488538041.86_warc_CC-MAIN-20210623103524-20210623133524-00411.warc.gz\"}"}
http://scholarpedia.org/article/Genetic_algorithms
[ "# Genetic algorithms\n\n John H. Holland (2012), Scholarpedia, 7(12):1482. doi:10.4249/scholarpedia.1482 revision #128222 [link to/cite this article]\nPost-publication activity\n\nCurator: John H. Holland\n\nGenetic algorithms are based on the classic view of a chromosome as a string of genes. R.A. Fisher used this view to found mathematical genetics, providing mathematical formula specifying the rate at which particular genes would spread through a population (Fisher, 1958). Key elements of Fisher’s formulation are:\n\n• a specified set of alternatives (alleles) for each gene, thereby specifying the allowable strings of genes (the possible chromosomes),\n• a generation-by-generation view of evolution where, at each stage, a population of individuals produces a set of offspring that constitutes the next generation,\n• a fitness function that assigns to each string of alleles the number of offspring the individual carrying that chromosome will contribute to the next generation, and\n• a set of genetic operators, particularly mutation in Fisher’s formulation, that modify the offspring of an individual so that the next generation differs from the current generation.\n\nNatural selection, in this formulation, can be thought of as a procedure for searching through the set of possible individuals, the search space, to find individuals of progressively higher fitness. Even when a natural population consists of a single species – say the current generation of humans – there is considerable variation within that population. These variants constitute samples of the search space.\n\n## Definition\n\nA genetic algorithm (GA) is a generalized, computer-executable version of Fisher’s formulation (Holland J, 1995). The generalizations consist of:\n\n• concern with interaction of genes on a chromosome, rather than assuming alleles act independently of each other, and\n• enlargement of the set of genetic operators to include other well-known genetic operators such as crossing-over (recombination) and inversion.\n\nUnder the first generalization, the fitness function becomes a complex nonlinear function that usually cannot be usefully approximated by summing up the effects of individual genes. The second generalization puts emphasis on genetic mechanisms, such as crossover, that operate regularly on chromosomes. Crossover takes place in every mating whereas mutation of a given gene typically occurs in less than 1 in a million individuals. Crossover is the central reason that mammals produce offspring exhibiting a mix of their parents’ characteristics – consider the human population for a vivid, familiar example – and crossover makes possible the artificial cross-breeding of selected animals and plants to produce superior varieties. Clearly crossover’s high frequency gives it an important role in evolution, and it has an essential role in the operation of GAs, as outlined below.\n\nCrossover in its simplest form (one-point crossover) is easily defined: Two chromosomes are lined up, a point along the chromosome is randomly selected, then the pieces to the left of that point are exchanged between the chromosomes, producing a pair of offspring chromosomes.\n\nThis simple version of crossover is a good approximation to what actually takes places in mating organisms. 
Under one-point crossover, alleles that are close together on the chromosome are likely to be passed as a unit to one of the offspring, while alleles that are far apart are likely to be separated by crossover with one allele appearing in one offspring and the other allele appearing in the other offspring. For example, given a chromosome with 1001 genes, there is only one chance in a thousand that an adjacent pair of alleles will be separated in the offspring, while alleles at opposite ends of the string will always be separated. In standard genetic terminology, this phenomenon is called linkage. By observing the frequency of separation of allele-determined characteristics in successive generations, linkage can be determined experimentally. Indeed, assuming one-point crossover made gene sequencing possible long before we had any knowledge of DNA – in 1944, using experimentally determined linkage, the Carnegie Institution of Washington published a large volume recording the order of genes in the fruit fly (Lindsley & Grell, 1944).\n\nThe genetic algorithm, following Fisher’s formulation, uses the differing fitness of variants in the current generation to propagate improvements to the next generation, but the GA places strong emphasis on the variants produced by crossover. The basic GA subroutine, which produces the next generation from the current one, executes the following steps (where, for simplicity, it is assumed that each individual is specified by a single chromosome):\n\n1. Start with a population of $$N$$ individual strings of alleles (perhaps generated at random).\n2. Select two individuals at random from the current population, biasing selection toward individuals with higher fitness.\n3. Use crossover (with occasional use of mutation) to produce two new individuals which are assigned to the next generation.\n4. Repeat steps (2) and (3) $$N/2$$ times to produce a new generation of $$N$$ individuals.\n5. Replace the current population with the new generation and return to step (2) to produce the next generation.\n\nOf course, there are many ways to modify these steps, but most characteristics of a GA are already exhibited by this basic program.\n\n## Crossover\n\nCrossover introduces a considerable complication in the study of successive generations: Highly effective individuals in the parent generation (say, an “Einstein”) will not reappear in the next generation, because only a subset of the parent’s alleles is passed on to any given offspring. This raises an important question: What is passed from generation to generation if the parents’ specific chromosomes are never passed on? An answer to this question requires a prediction of the generation-by-generation spread of clusters of alleles, requiring a substantial generalization of Fisher’s fundamental theorem. The schema theorem is one such generalization; it deals with arbitrary allele clusters called schemas. A schema is specified using the symbol * (‘don’t care’) to specify places along the chromosome not belonging to the cluster. For example, if there are two distinct alleles for each position, call them 1 and 0, then the cluster consisting of allele 1 at position 2, allele 0 at position 4, and allele 0 at position 5, is designated by the string *1*00**...*. Let N(s,t) be the number of chromosomes carrying schema s in generation t. The schema theorem specifies the (expected) number $$N(s,t+1)$$ of chromosomes carrying schema $$s$$ in the next generation.
A simplified version has the following form:\n\n$$N(s,t+1) = u(s,t)[1-e]N(s,t)$$\n\nwhere $$u(s,t)$$ is the average fitness of the chromosomes carrying schema $$s$$ at time $$t$$ (the observed average fitness), and $$e$$ is the overall probability (usually quite small) that the cluster s will be destroyed (or created) by mutation or crossover. (Note that $$e$$ does become large if the alleles in the cluster are spread over a considerable part of the chromosome). This formula for $$N(s,t+1)$$ can be restated in terms of probabilities (proportions) $$P(s,t)$$, a more typical form for mathematical genetics, by noting that $$P(s,t) = N(s,t)/N(t)$$, where $$N(t)$$ is the size of the population at time $$t$$.\n\nThe schema theorem shows that every cluster of alleles present in the population increases or decreases its proportion in the population at a rate determined by the observed average fitness of its carriers. In particular, schemas consistently associated with strings of above-average fitness spread rapidly through the population. As a result, crossover regularly tries out new combinations of these above-average schemas; they become building blocks for further attempts at solution. Though a GA only directly processes $$N(t)$$ strings in a generation, it effectively processes the much larger number of schemata carried in the population. For example, the number of schemas carried by a single string of length 40 is $$2^{40}$$ (on the order of $$10^{12}$$), so a GA processing a few dozen strings of length 40 effectively processes an enormous number of schemas. For problems in which schemas capture regularities in the search space, this is a tremendous speedup.\n\n## Uses\n\nGenetic algorithms are routinely used to find good solutions for problems that do not yield to standard techniques such as gradient ascent (“hill-climbing”) or additive approximations (“the whole equals the sum of the parts”) (Mitchell, 2009). Some typical problems on which the GA has been used are control of pipelines, jet engine design, scheduling, protein folding, machine learning, modeling language acquisition and evolution, and modeling complex adaptive systems (such as markets and ecosystems). To use the GA, the search space must be represented as strings over some fixed alphabet, much as biological chromosomes are represented by strings over 4 nucleotides. The strings can represent anything from biological organisms, to rules for processing signals, to agents in a complex adaptive system. The GA is initialized with a population of these strings, which may be simply selected at random from the search space, or the initial population may be “salted” with strings picked out using prior knowledge of the problem. The GA then processes the strings, and successive generations uncover schemas of above-average observed fitness. When above-average, well-linked schemas are regularly associated with improvements, the GA rapidly exploits those improvements." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8900647,"math_prob":0.9563307,"size":9966,"snap":"2019-35-2019-39","text_gpt3_token_len":1991,"char_repetition_ratio":0.14073479,"word_repetition_ratio":0.0,"special_character_ratio":0.20720449,"punctuation_ratio":0.113895215,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9737902,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T10:43:57Z\",\"WARC-Record-ID\":\"<urn:uuid:23dbf01f-2190-4d18-b1e6-0621c7df0548>\",\"Content-Length\":\"38676\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4fbd86ac-3cda-47bf-8cf6-9bb9dbbef04b>\",\"WARC-Concurrent-To\":\"<urn:uuid:8491af37-a1bc-40df-b660-fe01baa9bb5b>\",\"WARC-IP-Address\":\"173.255.237.117\",\"WARC-Target-URI\":\"http://scholarpedia.org/article/Genetic_algorithms\",\"WARC-Payload-Digest\":\"sha1:2X2WGKACKORGYPQ6TIUDTYTOFXF25FUY\",\"WARC-Block-Digest\":\"sha1:4CXO7BEDOJZYRKDY4A6HZZFNEGVOW57J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573070.36_warc_CC-MAIN-20190917101137-20190917123137-00026.warc.gz\"}"}
https://wiki.fysik.dtu.dk/ase/tutorials/acn_equil/acn_equil.html
[ "# Equilibrating an MD box of acetonitrile¶\n\nIn this tutorial we see how to perform a thermal equilibration of an MD box of classical acetonitrile molecules using the Langevin module and the implementation of an acetonitrile force field in ASE.\n\nThe acetonitrile force field implemented in ASE (`ase.calculators.acn`) is an interaction potential between three-site linear molecules, in which the atoms of the methyl group are treated as a single site centered on the methyl carbon, i.e. hydrogens are not considered explicitly. For this reason, while setting up a box of acetonitrile one has to assign the mass of a methyl to the outer carbon atom. The calculator requires the atomic sequence to be MeCN … MeCN or NCMeNCMe … NCMe, where Me represents the methyl site.\n\nAs for the TIPnP models, the acetonitrile potential works with rigid molecules. However, due to the linearity of the acetonitrile molecular model, we cannot fix the geometry by constraining all interatomic distances using `FixBondLengths`, as is done for TIPnP water. Instead, we must use the class `FixLinearTriatomic`\n\nThe MD procedure we use for the equilibration closely follows the one presented in the tutorial Equilibrating a TIPnP Water Box.\n\n```from ase import Atoms\nfrom ase.constraints import FixLinearTriatomic\nfrom ase.calculators.acn import (ACN, m_me,\nr_mec, r_cn)\nfrom ase.md import Langevin\nimport ase.units as units\nfrom ase.io import Trajectory\n\nimport numpy as np\n\npos = [[0, 0, -r_mec],\n[0, 0, 0],\n[0, 0, r_cn]]\natoms = Atoms('CCN', positions=pos)\natoms.rotate(30, 'x')\n\n# First C of each molecule needs to have the mass of a methyl group\nmasses = atoms.get_masses()\nmasses[::3] = m_me\natoms.set_masses(masses)\n\n# Determine side length of a box with the density of acetonitrile at 298 K\n# Density in g/Ang3 (https://pubs.acs.org/doi/10.1021/je00001a006)\nd = 0.776 / 1e24\nL = ((masses.sum() / units.mol) / d)**(1 / 3.)\n# Set up box of 27 acetonitrile molecules\natoms.set_cell((L, L, L))\natoms.center()\natoms = atoms.repeat((3, 3, 3))\natoms.set_pbc(True)\n\n# Set constraints for rigid triatomic molecules\nnm = 27\natoms.constraints = FixLinearTriatomic(\ntriples=[(3 * i, 3 * i + 1, 3 * i + 2)\nfor i in range(nm)])\n\ntag = 'acn_27mol_300K'\natoms.calc = ACN(rc=np.min(np.diag(atoms.cell)) / 2)\n\n# Create Langevin object\nmd = Langevin(atoms, 1 * units.fs,\ntemperature=300 * units.kB,\nfriction=0.01,\nlogfile=tag + '.log')\n\ntraj = Trajectory(tag + '.traj', 'w', atoms)\nmd.attach(traj.write, interval=1)\nmd.run(5000)\n\n# Repeat box and equilibrate further\natoms.set_constraint()\natoms = atoms.repeat((2, 2, 2))\nnm = 216\natoms.constraints = FixLinearTriatomic(\ntriples=[(3 * i, 3 * i + 1, 3 * i + 2)\nfor i in range(nm)])\n\ntag = 'acn_216mol_300K'\natoms.calc = ACN(rc=np.min(np.diag(atoms.cell)) / 2)\n\n# Create Langevin object\nmd = Langevin(atoms, 2 * units.fs,\ntemperature=300 * units.kB,\nfriction=0.01,\nlogfile=tag + '.log')\n\ntraj = Trajectory(tag + '.traj', 'w', atoms)\nmd.attach(traj.write, interval=1)\nmd.run(3000)\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60286963,"math_prob":0.97816575,"size":2966,"snap":"2023-40-2023-50","text_gpt3_token_len":901,"char_repetition_ratio":0.11681297,"word_repetition_ratio":0.18344519,"special_character_ratio":0.29028994,"punctuation_ratio":0.18060201,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9906967,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T16:18:13Z\",\"WARC-Record-ID\":\"<urn:uuid:8f194198-3eb8-4f6a-a9f1-092feceef668>\",\"Content-Length\":\"21863\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d0990d88-d3ba-4ccc-99c8-3eee4532c500>\",\"WARC-Concurrent-To\":\"<urn:uuid:0df0ad52-9cec-4393-99ae-d1d42dfd9aa0>\",\"WARC-IP-Address\":\"130.225.86.27\",\"WARC-Target-URI\":\"https://wiki.fysik.dtu.dk/ase/tutorials/acn_equil/acn_equil.html\",\"WARC-Payload-Digest\":\"sha1:AVFCRMPQBT2I7ZUEUPDNDM2TRACUDDVU\",\"WARC-Block-Digest\":\"sha1:37PK6TYO4V62TW6ARVETVLUISSRENY6C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511386.54_warc_CC-MAIN-20231004152134-20231004182134-00257.warc.gz\"}"}
http://flipcat.us/induction-furnace-circuit-diagram
[ "# Induction Furnace Circuit Diagram", null, "Techcommentary\n\nInduction furnace circuit diagram. induction furnace circuit diagram, induction furnace circuit diagram pdf, induction furnace circuit diagram capacitor, induction furnace circuit diagram datasheet, induction heating circuit diagram pdf, induction heating circuit diagram, induction furnace schematic diagram, induction melting furnace circuit diagram, induction heating schematic diagram, homemade induction furnace circuit diagram\n\nHello bro, My name is Alfi. Welcome to my site, we have many collection of Induction furnace circuit diagram pictures that collected by Flipcat.us from arround the internet\n\nThe rights of these images remains to it's respective owner's, You can use these pictures for personal use only.\n\nRandom post" ]
[ null, "x-raw-image:/0547dd50d9ae0dc7c76580b8ca74c5a24149dffbe0f1689fdfb2abbdebea8b39", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7984792,"math_prob":0.9410117,"size":718,"snap":"2019-13-2019-22","text_gpt3_token_len":123,"char_repetition_ratio":0.29411766,"word_repetition_ratio":0.0,"special_character_ratio":0.15877438,"punctuation_ratio":0.14035088,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96957237,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-25T23:59:35Z\",\"WARC-Record-ID\":\"<urn:uuid:72e4791e-7cb9-420d-8d1f-ae70a89de440>\",\"Content-Length\":\"137356\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e5930392-41dd-49ed-9e73-4e4498083d1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:fbfa6c93-e3f4-4da7-86ab-ec0675608a85>\",\"WARC-IP-Address\":\"104.18.37.3\",\"WARC-Target-URI\":\"http://flipcat.us/induction-furnace-circuit-diagram\",\"WARC-Payload-Digest\":\"sha1:PEPTYONPLGYOZFXSYB3MCZ35L247KLIW\",\"WARC-Block-Digest\":\"sha1:XUDSCQYYINGAB6GDAHMYZVSL6T7BAFSN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232258453.85_warc_CC-MAIN-20190525224929-20190526010929-00307.warc.gz\"}"}
https://www.esaral.com/q/find-the-sum-of-the-gp-95439
[ "", null, "# Find the sum of the GP :\n\nQuestion:\n\nFind the sum of the GP :\n\n$1+\\sqrt{3}+3+3 \\sqrt{3}+\\ldots . .$ to 10 terms\n\nSolution:\n\nSum of a G.P. series is represented by the formula, $\\mathrm{S}_{\\mathrm{n}}=\\mathrm{a} \\frac{\\mathrm{r}^{\\mathrm{n}}-1}{\\mathrm{r}-1}$\n\nwhen r>1. ‘Sn’ represents the sum of the G.P. series upto nth terms, ‘a’ represents the first term, ‘r’ represents the common ratio and ‘n’ represents the number of terms.\n\nHere,\n\na = 1\n\n$r=($ ratio between the $n$ term and $n-1$ term $) \\sqrt{3} \\div 1=\\sqrt{3}=1.732$\n\nn = 10 terms\n\n$\\therefore \\mathrm{S}_{\\mathrm{n}}=1 \\cdot \\frac{\\sqrt{3}^{10}-1}{\\sqrt{3}-1}$\n\n$\\Rightarrow \\mathrm{S}_{\\mathrm{n}}=\\frac{1.732^{10}-1}{1.732-1}$\n\n$\\Rightarrow \\mathrm{S}_{\\mathrm{n}}=\\frac{242.929-1}{0.732}$\n\n$\\Rightarrow \\mathrm{S}_{\\mathrm{n}}=\\frac{241.929}{0.732}$\n\n$\\Rightarrow \\mathrm{S}_{\\mathrm{n}}=330.504$" ]
[ null, "https://www.facebook.com/tr", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51290303,"math_prob":0.99999654,"size":804,"snap":"2023-14-2023-23","text_gpt3_token_len":308,"char_repetition_ratio":0.22,"word_repetition_ratio":0.0,"special_character_ratio":0.44527364,"punctuation_ratio":0.115384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000058,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T16:07:22Z\",\"WARC-Record-ID\":\"<urn:uuid:08e2850f-e119-40da-be21-d3a723be1f6d>\",\"Content-Length\":\"25020\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:39990c10-bf33-4384-8ed4-ee3383e9a915>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ee72d5b-c9dd-474b-afb1-1d6fa4af1e61>\",\"WARC-IP-Address\":\"172.67.213.11\",\"WARC-Target-URI\":\"https://www.esaral.com/q/find-the-sum-of-the-gp-95439\",\"WARC-Payload-Digest\":\"sha1:S57Q5UOCHGCWV3ADG7MN7MPBJNVIGTV4\",\"WARC-Block-Digest\":\"sha1:NTVLWPE47G3VFQ3YK4EAAJ46C7T33T3C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652959.43_warc_CC-MAIN-20230606150510-20230606180510-00323.warc.gz\"}"}
https://www.physicsforums.com/threads/mean-temperature-of-winding-when-current-falls.869789/
[ "# Mean temperature of winding when current falls\n\n## Homework Statement", null, "## Homework Equations\n\nR2=R1(1+alpha(t2-t1))\n\n## The Attempt at a Solution\n\nR1=250/5=50ohms\nR2=250/3.91=63.94ohms\n\nR2=R1(1+alpha15degrees(t2-t1))\n63.94=50(1+1/254.5(t2-15))\nt2=\n\nNow I found this online but the answers provided still don't match, 84.25 being the closest.", null, "When I manipulate the equation I get a totally different answer:\n63.94=50(1+1/254.5(t2-15))\n63.94/50=(1+1/254.5(t2-15))\n1.2788/1+1/254.5=t2-15\n1.274=t2-15\nt2=1.274+15=16.274 degrees\n\nRelated Introductory Physics Homework Help News on Phys.org\ncnh1995\nHomework Helper\nGold Member\nwhere I am going wrong.\n63.94/50=(1+1/254.5(t2-15))\n1.2788/1+1/254.5=t2-15", null, "Is this from B.L. Theraja?\n\nNot sure I found it online\n\ncnh1995\nHomework Helper\nGold Member\nNot sure I found it online\nIt is from B.L. Theraja. The page looks familiar. The red line in the above post is your error.\n\nWhat have I done wrong though? Every way I enter it into my calculator yields an incorrect answer\n\ncnh1995\nHomework Helper\nGold Member\n63.94/50=(1+1/254.5(t2-15))\n1.2788/1+1/254.5=t2-15[/\\QUOTE]\nThis step is wrong. Check the sequence of operations.\n\nAh I got it :-) thank you" ]
[ null, "data:image/svg+xml;charset=utf-8,%3Csvg xmlns%3D'http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg' width='783' height='223' viewBox%3D'0 0 783 223'%2F%3E", null, "data:image/svg+xml;charset=utf-8,%3Csvg xmlns%3D'http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg' width='552' height='160' viewBox%3D'0 0 552 160'%2F%3E", null, "data:image/svg+xml;charset=utf-8,%3Csvg xmlns%3D'http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg' width='552' height='160' viewBox%3D'0 0 552 160'%2F%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7915065,"math_prob":0.8026715,"size":531,"snap":"2020-10-2020-16","text_gpt3_token_len":222,"char_repetition_ratio":0.12713473,"word_repetition_ratio":0.0,"special_character_ratio":0.49905837,"punctuation_ratio":0.13235295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976528,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-30T17:32:36Z\",\"WARC-Record-ID\":\"<urn:uuid:552c45a9-e890-471e-919a-4699dd7b3d6a>\",\"Content-Length\":\"90530\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0315608-0380-4930-bbd5-74ad37c9de50>\",\"WARC-Concurrent-To\":\"<urn:uuid:51fff89c-0dd5-4c63-81b7-43cf8d34833a>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/mean-temperature-of-winding-when-current-falls.869789/\",\"WARC-Payload-Digest\":\"sha1:THRWVLDALDWT7RWF62R32T3UISLRPJ5N\",\"WARC-Block-Digest\":\"sha1:TMPRAPJ4PZWGXNWYUVE4HYYTNR55FJOB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370497171.9_warc_CC-MAIN-20200330150913-20200330180913-00429.warc.gz\"}"}
https://www.gurobi.com/documentation/9.0/examples/modify_a_model.html
[ "# Modify a model\n\nFilter Content By\nVersion\nLanguages\n\n## Modify a model\n\nExamples: diet, feasopt, fixanddive, gc_pwl_func, lpmod, sensitivity, workforce3, workforce4, workforce5\n\nThis section considers model modification. Modification can take many forms, including adding constraints or variables, deleting constraints or variables, modifying constraint and variable attributes, changing constraint coefficients, etc. The Gurobi examples don't cover all possible modifications, but they cover the most common types.\n\ndiet\n\nThis example builds a linear model that solves the classic diet problem: to find the minimum cost diet that satisfies a set of daily nutritional requirements. Once the model has been formulated and solved, it adds an additional constraint to limit the number of servings of dairy products and solves the model again. Let's focus on the model modification.\n\nAdding constraints to a model that has already been solved is no different from adding constraints when constructing an initial model. In Python, we can introduce a limit of 6 dairy servings through the following constraint:\n\n m.addConstr(buy['milk'] + buy['ice cream'] <= 6, \"limit_dairy\")\n\nFor linear models, the previously computed solution can be used as an efficient warm start for the modified model. The Gurobi solver retains the previous solution, so the next optimize call automatically starts from the previous solution.\n\nlpmod\n\nChanging a variable bound is also straightforward. The lpmod example changes a single variable bound, then re-solves the model in two different ways. A variable bound can be changed by modifying the UB or LB attribute of the variable. In C:\n\n error = GRBsetdblattrelement(model, GRB_DBL_ATTR_UB, var, 0.0);\n\nIn Python:\n minVar.ub = 0\n\nThe model is re-solved simply by calling the optimize method again. For a continuous model, this starts the optimization from the previous solution. To illustrate the difference when solving the model from an initial, unsolved state, the lpmod example calls the reset function. In C:\n error = GRBreset(model, 0);\n\nIn C++, Java, and Python:\n m.reset(0)\n\nIn C#:\n m.Reset(0)\n\nWhen we call the optimize method after resetting the model, optimization starts from scratch. Although the difference in computation time is insignificant for this tiny example, a warm start can make a big difference for larger models.\n\nfixanddive\n\nThe fixanddive example provides another example of bound modification. In this case, we repeatedly modify a set of variable bounds, utilizing warm starts each time. In C, variables are fixed as follows:\n\n for (j = 0; j < nfix; ++j)\n{\nfixval = floor(fractional[j].X + 0.5);\nerror = GRBsetdblattrelement(model, \"LB\", fractional[j].index, fixval);\nif (error) goto QUIT;\nerror = GRBsetdblattrelement(model, \"UB\", fractional[j].index, fixval);\nif (error) goto QUIT;\n}\n\nIn Python, they are fixed as follows:\n for i in range(nfix):\nv = fractional[i]\nfixval = int(v.x + 0.5)\nv.lb = fixval\nv.ub = fixval\n\nAgain, the subsequent call to optimize starts from the previous solution.\n\nsensitivity\n\nThe sensitivity example computes the optimal objective value associated with fixing each binary variable to 0 or 1. It first solves the given model to optimality. It then constructs a multi-scenario model, where in each scenario a binary variable is fixed to the complement of the value it took in the optimal solution. 
The resulting multi-scenario model is solved, giving the objective degradation associated with forcing each binary variable off of its optimal value.\n\nfeasopt\n\nThe last modification example we consider is feasopt, which adds variables to existing constraints and also changes the optimization objective. Setting the objective to zero is straightforward: simply call setObjective with a zero argument:\n\n m.setObjective(0)\n\nAdding new variables is somewhat more complex. In the example, we want to add artificial variable(s) to each constraint in order to allow the constraint to be relaxed. We use two artificial variables for equality constraints and one for inequality constraints. The Python code for adding a single artificial variable to constraint c is:\n feasModel.addVar(obj=1.0, name=\"ArtP_\" + c.Constrname, column=Column([1], [c]))\n\nWe use the column argument of the addVar method to specify the set of constraints in which the new variable participates, as well as the associated coefficients. In this example, the new variable only participates in the constraint to be relaxed. Default values are used here for all variable attributes except the objective and the variable name." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81175613,"math_prob":0.9537365,"size":4481,"snap":"2022-40-2023-06","text_gpt3_token_len":1022,"char_repetition_ratio":0.13669868,"word_repetition_ratio":0.015037594,"special_character_ratio":0.2041955,"punctuation_ratio":0.14805825,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9924621,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T21:00:14Z\",\"WARC-Record-ID\":\"<urn:uuid:eb0fae42-0997-45a1-a0bd-fa754a8729ef>\",\"Content-Length\":\"98847\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ad791bc-1515-4756-93ee-630af5beda43>\",\"WARC-Concurrent-To\":\"<urn:uuid:d570ec00-af3f-47ee-a5c4-7a5a1ed1894b>\",\"WARC-IP-Address\":\"54.148.70.159\",\"WARC-Target-URI\":\"https://www.gurobi.com/documentation/9.0/examples/modify_a_model.html\",\"WARC-Payload-Digest\":\"sha1:JXT56HHQKWVCXQP5XBQCXELJA6Q6KCDP\",\"WARC-Block-Digest\":\"sha1:WWQABHLW7AKD7NLEVJSF3JTNRQ5ITGZG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337668.62_warc_CC-MAIN-20221005203530-20221005233530-00057.warc.gz\"}"}
http://python.6.x6.nabble.com/a-more-precise-distance-algorithm-td5163342.html
[ "# a more precise distance algorithm", null, "Classic", null, "List", null, "Threaded", null, "18 messages", null, "Open this post in threaded view\n|\n\n## a more precise distance algorithm\n\n I read an interesting comment: \"\"\" The coolest thing I've ever discovered about Pythagorean's Theorem is an alternate way to calculate it. If you write a program that uses the distance form c = sqrt(a^2 + b^2) you will suffer from the lose of half of your available precision because the square root operation is last. A more accurate calculation is c = a * sqrt(1 + b^2 / a^2). If a is less than b, you should swap them and of course handle the special case of a = 0. \"\"\" Is this valid? Does it apply to python? Any other thoughts? :D My imagining: def distance(A, B):     \"\"\"     A & B are objects with x and y attributes     :return: the distance between A and B     \"\"\"     dx = B.x - A.x     dy = B.y - A.y     a = min(dx, dy)     b = max(dx, dy)     if a == 0:         return b     elif b == 0:         return a     else:         return a * sqrt(1 + (b / a)**2)\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n El 25/05/15 15:21, ravas escribi?: > I read an interesting comment: > \"\"\" > The coolest thing I've ever discovered about Pythagorean's Theorem is an alternate way to calculate it. If you write a program that uses the distance form c = sqrt(a^2 + b^2) you will suffer from the lose of half of your available precision because the square root operation is last. A more accurate calculation is c = a * sqrt(1 + b^2 / a^2). If a is less than b, you should swap them and of course handle the special case of a = 0. > \"\"\" > > Is this valid? Does it apply to python? > Any other thoughts? :D > > My imagining: > > def distance(A, B): >      \"\"\" >      A & B are objects with x and y attributes >      :return: the distance between A and B >      \"\"\" >      dx = B.x - A.x >      dy = B.y - A.y >      a = min(dx, dy) >      b = max(dx, dy) >      if a == 0: >          return b >      elif b == 0: >          return a >      else: >          return a * sqrt(1 + (b / a)**2) I don't know if precision lose fits here but the second way you gave to calculate c is just Math. Nothing extraordinary here. c = a * sqrt(1 + b^2 / a^2) c = sqrt(a^2(1 + b^2 / a^2)) applying the inverse function to introduce a inside the square root c = sqrt(a^2 + a^2*b^2/a^2) then just simplify c = sqrt(a^2 + b^2) -------------- next part -------------- An HTML attachment was scrubbed... URL:\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by ravas On 05/25/2015 12:21 PM, ravas wrote: > I read an interesting comment: > \"\"\" > The coolest thing I've ever discovered about Pythagorean's Theorem is an alternate way to calculate it. If you write a program that uses the distance form c = sqrt(a^2 + b^2) you will suffer from the lose of half of your available precision because the square root operation is last. A more accurate calculation is c = a * sqrt(1 + b^2 / a^2). If a is less than b, you should swap them and of course handle the special case of a = 0. > \"\"\" > > Is this valid? > Does it apply to python? This is a statement about floating point numeric calculations on a computer,.  As such, it does apply to Python which uses the underlying hardware for floating point calculations. Validity is another matter.  Where did you find the quote? Gary Herron > Any other thoughts? 
:D > > My imagining: > > def distance(A, B): >      \"\"\" >      A & B are objects with x and y attributes >      :return: the distance between A and B >      \"\"\" >      dx = B.x - A.x >      dy = B.y - A.y >      a = min(dx, dy) >      b = max(dx, dy) >      if a == 0: >          return b >      elif b == 0: >          return a >      else: >          return a * sqrt(1 + (b / a)**2)\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by ravas Am 25.05.15 um 21:21 schrieb ravas: > I read an interesting comment: > \"\"\" > The coolest thing I've ever discovered about Pythagorean's Theorem is an alternate way to calculate it. If you write a program that uses the distance form c = sqrt(a^2 + b^2) you will suffer from the lose of half of your available precision because the square root operation is last. A more accurate calculation is c = a * sqrt(1 + b^2 / a^2). If a is less than b, you should swap them and of course handle the special case of a = 0. > \"\"\" > > Is this valid? Yes. Valid for floating point math, which can overflow and lose precision. > Does it apply to python? Yes. Python uses floating point math by default > Any other thoughts? :D > > My imagining: > > def distance(A, B): Wrong. Just use the built-in function Math.hypot() - it should handle these cases and also overflow, infinity etc. in the best possible way. Apfelkiste:~ chris\\$ python Python 2.7.2 (default, Oct 11 2012, 20:14:37) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.  >>> import math  >>> math.hypot(3,4) 5.0         Christian\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n On Monday, May 25, 2015 at 1:27:24 PM UTC-7, Christian Gollwitzer wrote: > Wrong. Just use the built-in function Math.hypot() - it should handle > these cases and also overflow, infinity etc. in the best possible way. > > Apfelkiste:~ chris\\$ python > Python 2.7.2 (default, Oct 11 2012, 20:14:37) > [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin > Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >  >>> import math >  >>> math.hypot(3,4) > 5.0 > > Christian Thank you! :D I forgot about that one. I wonder how the sympy Point.distance method compares...\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by ravas On Monday, May 25, 2015 at 1:27:43 PM UTC-7, Gary Herron wrote: > This is a statement about floating point numeric calculations on a > computer,.  As such, it does apply to Python which uses the underlying > hardware for floating point calculations. > > Validity is another matter.  Where did you find the quote? Thank you. You can find the quote in the 4th comment at the bottom of: http://betterexplained.com/articles/surprising-uses-of-the-pythagorean-theorem/\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by ravas On Tue, 26 May 2015 05:21 am, ravas wrote: > I read an interesting comment: > \"\"\" > The coolest thing I've ever discovered about Pythagorean's Theorem is an > alternate way to calculate it. If you write a program that uses the > distance form c = sqrt(a^2 + b^2) you will suffer from the lose of half of > your available precision because the square root operation is last. A more > accurate calculation is c = a * sqrt(1 + b^2 / a^2). 
If a is less than b, > you should swap them and of course handle the special case of a = 0. \"\"\" Let's compare three methods. import math import random def naive(a, b):     return math.sqrt(a**2 + b**2) def alternate(a, b):     a, b = min(a, b), max(a, b)     if a == 0:  return b     if b == 0:  return a     return a * math.sqrt(1 + b**2 / a**2) counter = 0 print(\"Type Ctrl-C to exit\") while True:     counter += 1     a = random.uniform(0, 1000)     b = random.uniform(0, 1000)     d1 = naive(a, b)     d2 = alternate(a, b)     d3 = math.hypot(a, b)     if not (d1 == d2 == d3):         print(\"mismatch after %d trials\" % counter)         print(\"naive:\", d1)         print(\"alternate:\", d2)         print(\"hypot:\", d3)         break When I run that, I get: mismatch after 3 trials naive: 1075.6464259886257 alternate: 1075.6464259886257 hypot: 1075.6464259886254 A second run gives: mismatch after 3 trials naive: 767.3916150255787 alternate: 767.3916150255789 hypot: 767.3916150255787 which shows that: (1) It's not hard to find mismatches; (2) It's not obvious which of the three methods is more accurate. -- Steven\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n On Monday, May 25, 2015 at 8:11:25 PM UTC-7, Steven D'Aprano wrote: > Let's compare three methods. > ... > which shows that: > > (1) It's not hard to find mismatches; > (2) It's not obvious which of the three methods is more accurate. Thank you; that is very helpful! I'm curious: what about the sqrt() function being last is detrimental? >From a point of ignorance it seems like we are just producing errors sooner, and then multiplying them, with this alternative method.\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by ravas On Mon, May 25, 2015 at 1:21 PM, ravas wrote: > I read an interesting comment: > \"\"\" > The coolest thing I've ever discovered about Pythagorean's Theorem is an alternate way to calculate it. If you write a program that uses the distance form c = sqrt(a^2 + b^2) you will suffer from the lose of half of your available precision because the square root operation is last. A more accurate calculation is c = a * sqrt(1 + b^2 / a^2). If a is less than b, you should swap them and of course handle the special case of a = 0. > \"\"\" > > Is this valid? Does it apply to python? > Any other thoughts? :D > > My imagining: > > def distance(A, B): >     \"\"\" >     A & B are objects with x and y attributes >     :return: the distance between A and B >     \"\"\" >     dx = B.x - A.x >     dy = B.y - A.y >     a = min(dx, dy) >     b = max(dx, dy) >     if a == 0: >         return b >     elif b == 0: >         return a This branch is incorrect because a could be negative. You don't need this anyway; the a == 0 branch is only there because of the division by a in the else branch. >     else: >         return a * sqrt(1 + (b / a)**2) Same issue; if a is negative then the result will have the wrong sign.\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by ravas Oh ya... 
true >_< Thanks :D On Monday, May 25, 2015 at 9:43:47 PM UTC-7, Ian wrote: > > def distance(A, B): > >     \"\"\" > >     A & B are objects with x and y attributes > >     :return: the distance between A and B > >     \"\"\" > >     dx = B.x - A.x > >     dy = B.y - A.y > >     a = min(dx, dy) > >     b = max(dx, dy) > >     if a == 0: > >         return b > >     elif b == 0: > >         return a > > This branch is incorrect because a could be negative. > > You don't need this anyway; the a == 0 branch is only there because of > the division by a in the else branch. > > >     else: > >         return a * sqrt(1 + (b / a)**2) > > Same issue; if a is negative then the result will have the wrong sign.\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by ravas On 05/25/2015 09:13 PM, ravas wrote: > On Monday, May 25, 2015 at 8:11:25 PM UTC-7, Steven D'Aprano wrote: >> Let's compare three methods. >> ... >> which shows that: >> >> (1) It's not hard to find mismatches; >> (2) It's not obvious which of the three methods is more accurate. > Thank you; that is very helpful! > > I'm curious: what about the sqrt() function being last is detrimental? >  From a point of ignorance it seems like we are just producing errors sooner, > and then multiplying them, with this alternative method. It's probably not the square root that's causing the inaccuracies. In many other cases, and probably here also, it's the summing of two numbers that have vastly different values that loses precision.  A demonstration:  >>> big = 100000000.0  >>> small = 0.000000001  >>> (big+small)-big # Should produce a value =small, but gives an exact zero instead. 0.0 The squaring of the two values in x*x+y*y just makes the addition even more error prone since the squares make large values even larger and small values even smaller. Gary Herron. -- Dr. Gary Herron Department of Computer Science DigiPen Institute of Technology (425) 895-4418\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by Steven D'Aprano-8 Am 26.05.15 um 05:11 schrieb Steven D'Aprano: > mismatch after 3 trials > naive: 767.3916150255787 > alternate: 767.3916150255789 > hypot: 767.3916150255787 > > > which shows that: > > (1) It's not hard to find mismatches; > (2) It's not obvious which of the three methods is more accurate. The main problem is not necessarily precision. A square root is a very precise operation in floating point math, the relative precision *increases* by sqrt. The big problem is overflow. Take e.g. a=3*10^160, b=4*10^160, then the exact result is c=5*10^160. But:  >>> a=3e160  >>> b=4e160  >>> math.sqrt(a**2+b**2) Traceback (most recent call last):    File \"\", line 1, in OverflowError: (34, 'Result too large')  >>> math.hypot(a,b) 5e+160         Christian\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by ravas On Monday, May 25, 2015 at 10:16:02 PM UTC-7, Gary Herron wrote: > It's probably not the square root that's causing the inaccuracies. In > many other cases, and probably here also, it's the summing of two > numbers that have vastly different values that loses precision.  A > demonstration: > >  >>> big = 100000000.0 >  >>> small = 0.000000001 >  >>> (big+small)-big # Should produce a value =small, but gives an exact > zero instead. 
> 0.0 > > The squaring of the two values in x*x+y*y just makes the addition even > more error prone since the squares make large values even larger and > small values even smaller. I appreciate your help.\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n In reply to this post by ravas On Mon, May 25, 2015, at 15:21, ravas wrote: > Is this valid? Does it apply to python? > Any other thoughts? :D The math.hypot function uses the C library's function which should deal with such concerns internally. There is a fallback version in case the C library does not have this function, in Python/pymath.c - which, incidentally, does use your algorithm.\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n On Tue, May 26, 2015, at 09:40, random832 at fastmail.us wrote: > On Mon, May 25, 2015, at 15:21, ravas wrote: > > Is this valid? Does it apply to python? > > Any other thoughts? :D > > The math.hypot function uses the C library's function which should deal > with such concerns internally. There is a fallback version in case the C > library does not have this function, in Python/pymath.c - which, > incidentally, does use your algorithm. Well, I should say, not _precisely_ your algorithm. The \"0 special case\" mentioned in the text you read was for both values being zero, not just one. The biggest flaw in your function, though, was the failure to take the absolute values of the differences. This defeats the point of swapping them (which I assume is to get the magnitudes in the order needed for best precision), and makes it possible for your function to return a negative value when the other is zero. Here's the equivalent python code for the hypot function in pymath.c, and for your distance function. from math import sqrt def hypot(x, y):     x = abs(x)     y = abs(y)     if x < y:         x, y = y, x     if x == 0:  # both are 0 due to the swap         return 0.0     else:         return x*sqrt(1.0 + (y/x)**2) def distance(A, B):     return hypot(A.x-B.x, A.y-B.y) What I wonder is if there's a best way to do it for three dimensions. I mean, you could simply do hypot(hypot(dx, dy), dz), but should you choose the largest, smallest, or middle value to be the odd one out?\nOpen this post in threaded view\n|\n\n## a more precise distance algorithm\n\n A minor point is that if you just need to compare distances you don't need to compute the hypotenuse, its square will do so no subtractions etc etc. -- Robin Becker" ]
[ null, "http://python.6.x6.nabble.com/images/view-classic.gif", null, "http://python.6.x6.nabble.com/images/view-list.gif", null, "http://python.6.x6.nabble.com/images/view-threaded.gif", null, "http://python.6.x6.nabble.com/images/pin.png", null, "http://python.6.x6.nabble.com/images/gear.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8130568,"math_prob":0.9548337,"size":17663,"snap":"2019-43-2019-47","text_gpt3_token_len":5412,"char_repetition_ratio":0.097797155,"word_repetition_ratio":0.5056285,"special_character_ratio":0.38826928,"punctuation_ratio":0.17811911,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99448967,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T04:09:44Z\",\"WARC-Record-ID\":\"<urn:uuid:7ce986ee-ad1a-4f54-92a0-049b2e6665e9>\",\"Content-Length\":\"132248\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d994ef68-7cee-47f3-9f0c-7e6c156a7f85>\",\"WARC-Concurrent-To\":\"<urn:uuid:e39dbf4b-83b8-467a-8333-69bb1f15d5b8>\",\"WARC-IP-Address\":\"162.255.23.37\",\"WARC-Target-URI\":\"http://python.6.x6.nabble.com/a-more-precise-distance-algorithm-td5163342.html\",\"WARC-Payload-Digest\":\"sha1:B5JEURRYZYFSLJIDVBLEN7B365GE343G\",\"WARC-Block-Digest\":\"sha1:2UDFMYECGFAVEELMKOXVJCFE6QYL2223\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669967.80_warc_CC-MAIN-20191119015704-20191119043704-00418.warc.gz\"}"}
https://analytics4all.org/2016/05/11/python-linear-regression/
[ "# Python: Linear Regression\n\nRegression is still one of the most widely used predictive methods. If you are unfamiliar with Linear Regression, check out my: Linear Regression using Excel lesson. It will explain the more of the math behind what we are doing here. This lesson is focused more on how to code it in Python.\n\nWe will be working with the following data set: Linear Regression Example File 1\n\n## Import the data", null, "What we have is a data set representing years worked at a company and salary.\n\n## Let’s plot it\n\nBefore we go any further, let’s plot the data.\n\nLooking at the plot, it looks like there is a possible correlation.", null, "## Linear Regression using scipy\n\nscipy library contains some easy to use maths and science tools. In this case, we are importing stats from scipy\n\nthe method stats.linregress() produces the following outputs: slope, y-intercept, r-value, p-value, and standard error.", null, "I set slope to m and y-intercept to b: so we match the linear formula y = mx+b\n\nUsing the results of our regression, we  can create an easy function to predict a salary. In the example below, I want to predict the salary of a person who has been working there 10 years.", null, "Our p value is nice and low. This means our variables do have an effect on each other", null, "Our standard error is 250, but this can be misleading based on the size of the values in your regression. A better measurement is r squared. We find that by squaring our r output", null, "R squared runs from 0 (bad) to 1 (good). Our R squared is .44. So our regression is not that great. I prefer to keep a r squared value at least .6 or above.\n\n## Plot the Regression Line\n\nWe can use our pred() function to find the y-coords needed to plot our regression line.\n\nPassing pred() a x value of 0 I get our bottom value. I pass pred() a x value of 35 to get our top value.\n\nI then redo my scatter plot just like above. Then I plot my line using plt.plot().", null, "If you enjoyed this lesson, click LIKE below, or even better, leave me a COMMENT.\n\nFollow this link for more Python content: Python" ]
[ null, "https://i0.wp.com/analytics4all.org/wp-content/uploads/2016/05/linreg.jpg", null, "https://i0.wp.com/analytics4all.org/wp-content/uploads/2016/05/linreg1.jpg", null, "https://i0.wp.com/analytics4all.org/wp-content/uploads/2016/05/linreg2.jpg", null, "https://i0.wp.com/analytics4all.org/wp-content/uploads/2016/05/linreg3.jpg", null, "https://i0.wp.com/analytics4all.org/wp-content/uploads/2016/05/linreg4.jpg", null, "https://i0.wp.com/analytics4all.org/wp-content/uploads/2016/05/linreg5.jpg", null, "https://i0.wp.com/analytics4all.org/wp-content/uploads/2016/05/linreg6.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86298865,"math_prob":0.904154,"size":2128,"snap":"2022-40-2023-06","text_gpt3_token_len":486,"char_repetition_ratio":0.12664783,"word_repetition_ratio":0.0052083335,"special_character_ratio":0.22697368,"punctuation_ratio":0.114349775,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999203,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T16:00:01Z\",\"WARC-Record-ID\":\"<urn:uuid:7c4067ea-df32-49c6-8422-f3fb44b2933c>\",\"Content-Length\":\"133188\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:10c50346-6a63-4ff1-9ec8-06dc0a573a92>\",\"WARC-Concurrent-To\":\"<urn:uuid:85a943f2-99c8-4918-89eb-056b1a2472d4>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://analytics4all.org/2016/05/11/python-linear-regression/\",\"WARC-Payload-Digest\":\"sha1:OSWXQASGTE26TSYBRET65BZKTWIISF4D\",\"WARC-Block-Digest\":\"sha1:YSONAE2ULSUWVAII57SRCTTIODCUD7BG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337421.33_warc_CC-MAIN-20221003133425-20221003163425-00444.warc.gz\"}"}
https://www.fxsolver.com/browse/formulas/Conical+pendulum
[ "'\n\n# Conical pendulum\n\n## Description\n\nA conical pendulum is a weight (or bob) fixed on the end of a string (or rod) suspended from a pivot. Its construction is similar to an ordinary pendulum; however, instead of rocking back and forth, the bob of a conical pendulum moves at a constant speed in a circle with the string (or rod) tracing out a cone. The time required for the bob of the conical pendulum to travel one revolution depends on the horizontal circle’s radius and the angle between the leg and the hypotenuse.", null, "Related formulas\n\n## Variables\n\n t Time for one revolution (sec) π pi r Horizontal circle's radius (m) g Standard gravity θ The angle between the leg and the hypotenuse (radians)" ]
[ null, "https://www.fxsolver.com/media/wiki//250px-Conical_pendulum.svg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.816718,"math_prob":0.87301826,"size":693,"snap":"2021-31-2021-39","text_gpt3_token_len":166,"char_repetition_ratio":0.13352685,"word_repetition_ratio":0.034782607,"special_character_ratio":0.21500722,"punctuation_ratio":0.04761905,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96624625,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-22T04:37:16Z\",\"WARC-Record-ID\":\"<urn:uuid:fcebb8a0-e6d2-4d39-ad97-77ea45e42b73>\",\"Content-Length\":\"20032\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8fd77ac7-5c9d-48e6-9f76-23595f53e5f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:36a5f1e1-7525-4f15-a328-d9f776c2f780>\",\"WARC-IP-Address\":\"178.254.54.75\",\"WARC-Target-URI\":\"https://www.fxsolver.com/browse/formulas/Conical+pendulum\",\"WARC-Payload-Digest\":\"sha1:C2XVODKUHROV67NVLLLSKC4MG6HRYSWK\",\"WARC-Block-Digest\":\"sha1:ZIMDCPKPC3PGKYC6VJJEQY3GZJHF3XQE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057329.74_warc_CC-MAIN-20210922041825-20210922071825-00699.warc.gz\"}"}
https://sensevietnam.com/qa/what-type-of-number-is-the-square-root-of-19.html
[ "", null, "# What Type Of Number Is The Square Root Of 19?\n\n## What is the perfect square of 15?\n\n225Perfect Square:Positive IntegerInteger Squared=Perfect Squares List1212 ^2 =1441313 ^2 =1691414 ^2 =1961515 ^2 =22592 more rows.\n\n## What are the factors for 19?\n\nThe only factors of 19 are 1 and 19, so 19 is a prime number. That is, 19 is divisible by only 1 and 19, so it is prime.\n\n## What two numbers make 19?\n\nThe number 19 is prime, divisible only by itself and 1. You could “make” 19 by adding 1 and 18, 2 and 17 and so forth.\n\n## Is square root of 19 a rational number?\n\nAnswer and Explanation: The square root of 19 is not a rational number.\n\n## Is square root of 15 a rational number?\n\nExplanation: 15=3×5 has no square factors, so √15 cannot be simplified. It is not expressible as a rational number. It is an irrational number a little less than 4 .\n\n## What type of number is √ 16?\n\nSquare root of 16 is +4 or -4. Since -4 is not a natural number, the square root can be described as an integer.\n\n## How do you know if a number is irrational?\n\nAn irrational number is a number that is NOT rational. It cannot be expressed as a fraction with integer values in the numerator and denominator. When an irrational number is expressed in decimal form, it goes on forever without repeating.\n\n## What is 19 squared?\n\n361The square of 19 is 19*19 = 361.\n\n## Is the square root of 16 a rational number?\n\nThe square root of 16 is a rational number. The square root of 16 is 4, an integer.\n\n## Is 9 square root a rational number?\n\nBut √4 = 2 (rational), and √9 = 3 (rational) … … so not all roots are irrational.\n\n## Why is the number 19 100 a rational number?\n\nA. It is the quotient of 100 divided by 19. It is the quotient of 9 divided by 10. …\n\n## Is 17 rational or irrational?\n\nIn mathematics rational means “ratio like.” So a rational number is one that can be written as the ratio of two integers. For example 3=3/1, −17, and 2/3 are rational numbers. Most real numbers (points on the number-line) are irrational (not rational).\n\n## How many natural numbers are there between 18 square and 19 square?\n\n36 natural numbers36 natural numbers lie between 18 squared and 19 squared.\n\n## Is 19 irrational or rational?\n\nThe number 19 is a rational number if 19 can be expressed as a ratio, as in RATIOnal. A quotient is the result you get when you divide one number by another number. For 19 to be a rational number, the quotient of two integers must equal 19.\n\n## What are the two square roots of 16?\n\nTable of Squares and Square RootsNUMBERSQUARESQUARE ROOT131693.606141963.742152253.873162564.00096 more rows\n\n## What kind of number is square root of 15?\n\nThe square root of 15 is a rational number if 15 is a perfect square. It is an irrational number if it is not a perfect square. Since 15 is not a perfect square, it is an irrational number. This means that the answer to “the square root of 15?” will have an infinite number of decimals.\n\n## Is 19 a lucky number?\n\nIs 19 a lucky number in numerology? Yes, number 19 is a lucky number in numerology. This number reduces to 1 which represents excellence. People influenced by number 19 are likely to be successful, famous, and intelligent.\n\n## What kind of number is 19?\n\n19 (number)← 18 19 20 →CardinalnineteenOrdinal19th (nineteenth)Numeral systemnonadecimalFactorizationprime10 more rows\n\n## Is 16 a real number?\n\nRational numbers include natural numbers, whole numbers, and integers. 
… Sixteen is natural, whole, and an integer. Since it can also be written as the ratio 16:1 or the fraction 16/1, it is also a rational number. It’s easy to look at a fraction and say it’s a rational number, but math has its rules." ]
[ null, "https://mc.yandex.ru/watch/69504541", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9029625,"math_prob":0.9977224,"size":4313,"snap":"2021-43-2021-49","text_gpt3_token_len":1153,"char_repetition_ratio":0.26317012,"word_repetition_ratio":0.17300613,"special_character_ratio":0.30164617,"punctuation_ratio":0.12554112,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994431,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T15:59:28Z\",\"WARC-Record-ID\":\"<urn:uuid:755d9528-d3fd-454e-9fe8-d7bcdd3d2643>\",\"Content-Length\":\"36308\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99f00eea-809d-4426-bbfa-2983a6639710>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9ed32f2-6b62-43f3-9c43-937689192051>\",\"WARC-IP-Address\":\"45.130.40.27\",\"WARC-Target-URI\":\"https://sensevietnam.com/qa/what-type-of-number-is-the-square-root-of-19.html\",\"WARC-Payload-Digest\":\"sha1:IFPXXPYFULE4XHW7AYCLEAS3W5FTNNE7\",\"WARC-Block-Digest\":\"sha1:R2Z5DKYZHXZTCCIVOI3LKWENFRLADE7Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585424.97_warc_CC-MAIN-20211021133500-20211021163500-00528.warc.gz\"}"}
https://math.stackexchange.com/questions/1666727/does-twos-complement-arithmetic-produce-a-field-isomorphic-to-gf2n
[ "# Does two's complement arithmetic produce a field isomorphic to $GF(2^{n}$)?\n\nFrom what I understand, we have these two isomorphisms:\n\n• $(TC, +)$ is isomorphic to the cyclic group $\\mathbb{Z}/2^n\\mathbb{Z}$.\n• $(TC, *)$ is isomorphic to the multiplicative group of polynomials.\n\nIf this is correct, can we conclude that two's complement arithmetic produces a finite field isomorphic to $GF(2^{n}$)?\n\nIf not, what algebraic structure, if any, does two's complement representation and arithmetic produce? Because there just seems to be something there.\n\n• I'm afraid I don't know what TC-arithmetic really is, but it sure looks like the answer is No. The finite field $GF(2^n)$ is an $n$-dimensional vector space over $\\Bbb{Z}_2$. Its additive group is isomorphic to bitwise XOR of $n$-bit masks. Its multiplication is more complicated. It is a lot like polynomials with coefficients in $\\Bbb{Z}_2$, but modulo a chosen irreducible polynomial. Without an irreducible polynomial you won't get a group. The arithmetic of $GF(2^n)$ most emphatically has nothing to do with arithmetic modulo $2^n$, $x+x=0$ for all $x\\in GF(2^n)$ (bitwise XOR!!). – Jyrki Lahtonen Feb 22 '16 at 7:55\n• It just seems like two's complement representation and arithmetic produces some kind of algebraic structure. Intuition suggests it must be something close to a finite field. If not, I was wondering what it was then. – Leo Heinsaar Feb 22 '16 at 10:10\n• Leo, if your multiplication as polynomials means, among other things, that $0x0002\\cdot 0x0002=0x0004$, $0x0002\\cdot0x0004=0x0008$, $0x0003\\cdot0x0003=0x0005$ et cetera, then it becomes a problem that this multiplication does not mesh at all well with modular integer addition. – Jyrki Lahtonen Feb 22 '16 at 10:17\n• Thanks Jyrki. The answer below also makes a good point that TC arithmetic can't be a field itself because it has zero divisors. I just wanted to understand in more detail how close it gets. – Leo Heinsaar Feb 22 '16 at 10:29\n\n## 1 Answer\n\nAs Jyrki Lahtonen has already said, the additive group is already a problem, since for $GF(2^n)$ it is $(\\mathbb{Z}/2\\mathbb{Z})^n$, not $\\mathbb{Z}/2^n \\mathbb{Z}$.\n\nIn fact, it cannot be a field, since it has zero-divisors: $2^k \\cdot 2^{n-k} = 0$.\n\nTwo's complement arithmetic is isomorphic to the quotient in your question, $\\mathbb{Z}/2^n \\mathbb{Z}$, but not only as groups, but as rings, that is, with multiplication too. This structure is rarely a field, through (only if $n=1$).\n\n• Is it not a field only because it has zero divisors, or is there something else as well? (Because if it's just zero divisors, we can forgive it and give it an honorary field title for its enormous contribution to mankind. :-) – Leo Heinsaar Feb 22 '16 at 10:24\n• @LeoHeinsaar Well, it is associative, commutative and unital. Posessing zero devisors seems to be the first obstacle to being a field, all other stuff follows from that (lacking multiplicative inverses, having proper nontrivial ideals, etc). – lisyarus Feb 22 '16 at 10:31" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91049147,"math_prob":0.99012935,"size":470,"snap":"2019-35-2019-39","text_gpt3_token_len":117,"char_repetition_ratio":0.12017167,"word_repetition_ratio":0.0,"special_character_ratio":0.24468085,"punctuation_ratio":0.15116279,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99897146,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T05:24:52Z\",\"WARC-Record-ID\":\"<urn:uuid:77385bac-9a8b-4305-a492-19a77aefa86b>\",\"Content-Length\":\"142309\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b27d7986-976b-4c8c-912b-29029f60cb69>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca8be8ee-1aa4-40cc-91e5-4d667e212f66>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1666727/does-twos-complement-arithmetic-produce-a-field-isomorphic-to-gf2n\",\"WARC-Payload-Digest\":\"sha1:ECFWYZGEIOCHR3XJEWDB3ISBJLPYIFZH\",\"WARC-Block-Digest\":\"sha1:BBWNNECYCLAU2TGARCHNU2ZX7IH2WPKG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330968.54_warc_CC-MAIN-20190826042816-20190826064816-00136.warc.gz\"}"}
http://devlib.symbian.slions.net/s3/GUID-ACCCB148-DAF9-59EC-B585-8EF632B9BF04.html
[ "# SQL Joins\n\nThis guide explains how to use CROSS JOIN phrases to override the optimizer's ordering of tables.\n\n### Introduction\n\nSQLite uses the “CROSS JOIN” phrase as a means to override the table reordering decisions of the query optimizer. The CROSS JOIN connector is rarely needed and should probably never be used prior to the final performance tuning phase of application development. Even then, SQLite usually gets the order of tables in a join right without any extra help. But on those rare occasions when SQLite gets it wrong, the CROSS JOIN connector is an invaluable way of tweaking the optimizer to do what you want.\n\nIntended audience:\n\nThis document is intended to be used by Symbian platform licensees and third party application developers.\n\n### Use CROSS JOIN to Force a Particular Join Ordering\n\nThe SQLite query optimizer will attempt to reorder the tables in a join in order to find the most efficient way to evaluate the join. The optimizer usually does this job well, but occasionally it will make a bad choice. When that happens, it might be necessary to override the optimizer's choice by explicitly specifying the order of tables in the SELECT statement.\n\nTo illustrate the problem, consider the following schema:\n\n```CREATE TABLE node(\nid INTEGER PRIMARY KEY,\nname TEXT\n);\n\nCREATE INDEX node_idx ON node(name);\n\nCREATE TABLE edge(\norig INTEGER REFERENCES node,\ndest INTEGER REFERENCES node,\nPRIMARY KEY(orig, dest)\n);\n\nCREATE INDEX edge_idx ON edge(dest,orig);\n```\n\nThis schema defines a directed graph with the ability to store a name on each node of the graph. Similar designs (though usually more complicated) arise frequently in application development. Now consider a three-way join against the above schema:\n\n```SELECT e.*\nFROM edge AS e,\nnode AS n1,\nnode AS n2\nWHERE n1.name = 'alice'\nAND n2.name = 'bob'\nAND e.orig = n1.id\nAND e.dest = n2.id;\n```\n\nThis query asks for information about all edges that go from nodes labelled “alice” over to nodes labelled “bob”.\n\nThere are many ways that the optimizer might choose to implement this query, but they all boil down to two basic designs. The first option looks for edges between all pairs of nodes. The following pseudocode illustrates:\n\n```foreach n1 where n1.name='alice' do:\nforeach n2 where n2.name='bob' do:\nforeach e where e.orig=n1.id and e.dest=n2.id do:\nreturn e.*\nend\nend\nend\n```\n\nThe second design is to loop over all 'alice' nodes and follow edges off of those nodes looking for nodes named 'bob'. (The roles of 'alice' and 'bob' might be reversed here without changing the fundamental character or the algorithm):\n\n```foreach n1 where n1.name='alice' do:\nforeach e where e.orig=n1.id do:\nforeach n2 where n2.id=e.dest and n2.name='bob' do:\nreturn e.*\nend\nend\nend\n```\n\nThe first algorithm above corresponds to a join order of n1-n2-e. The second algorithm corresponds to a join order of n1-e-n2.\n\nThe question the optimizer has to answer is which of these two algorithms is likely to give the fastest result, and it turns out that the answer depends on the nature of the data stored in the database.\n\nLet the number of alice nodes be M and the number of bob nodes be N. Consider two scenarios: In the first scenario, M and N are both 2 but there are thousands of edges on each node. In this case, the first algorithm is preferred. With the first algorithm, the inner loop checks for the existence of an edge between a pair of nodes and outputs the result if found. 
But because there are only 2 alice and bob nodes each, the inner loop only has to run 4 times and the query is very quick.\n\nThe second algorithm would take much longer here. The outer loop of the second algorithm only executes twice, but because there are a large number of edges leaving each 'alice' node, the middle loop has to iterate many thousands of times. So in the first scenario, we prefer to use the first algorithm.\n\nNow consider the case where M and N are both 3500. But suppose each of these nodes is connected by only one or two edges. In this case, the second algorithm is preferred.\n\nWith the second algorithm, the outer loop still has to run 3500 times, but the middle loop only runs once or twice for each outer loop and the inner loop will only run once for each middle loop, if at all. So the total number of iterations of the inner loop is around 7000.\n\nThe first algorithm, on the other hand, has to run both its outer loop and its middle loop 3500 times each, resulting in 12 million iterations of the middle loop. Thus in the second scenario, second algorithm is nearly 2000 times faster than the first.\n\nIn this particular example, if you run ANALYZE on your database to collect statistics on the tables, the optimizer will be able to figure out the best algorithm to use. But if you do not want to run ANALYZE or if you do not want to waste database space storing the SQLITE_STAT1 statistics table that ANALYZE generates, you can manually override the decision of the optimizer by specifying a particular order for tables in a join. You do this by substituting the keyword phrase “CROSS JOIN” in place of commas in the FROM clause.\n\nThe “CROSS JOIN” phrase forces the table to the left to be used before the table to the right. For example, to force the first algorithm, write the query this way:\n\n```SELECT *\nFROM node AS n1 CROSS JOIN\nnode AS n2 CROSS JOIN\nedge AS e\nWHERE n1.name = 'alice'\nAND n2.name = 'bob'\nAND e.orig = n1.id\nAND e.dest = n2.id;\n```\n\nAnd to force the second algorithm, write the query like this:\n\n```SELECT *\nFROM node AS n1 CROSS JOIN\nedge AS e CROSS JOIN\nnode AS n2\nWHERE n1.name = 'alice'\nAND n2.name = 'bob'\nAND e.orig = n1.id\nAND e.dest = n2.id;\n```\n\nThe CROSS JOIN keyword phrase is perfectly valid SQL syntax according to the SQL standard, but it is syntax that is rarely if ever used in real-world SQL statements. Because it is so rarely used otherwise, SQLite has appropriated the phrase as a means to override the table reordering decisions of the query optimizer.\n\nThe CROSS JOIN connector is rarely needed and should probably never be used prior to the final performance tuning phase of application development. Even then, SQLite usually gets the order of tables in a join right without any extra help. But on those rare occasions when SQLite gets it wrong, the CROSS JOIN connector is an invaluable way of tweaking the optimizer to do what you want." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84443414,"math_prob":0.9155696,"size":6212,"snap":"2019-13-2019-22","text_gpt3_token_len":1364,"char_repetition_ratio":0.12145618,"word_repetition_ratio":0.21356554,"special_character_ratio":0.22198969,"punctuation_ratio":0.10697306,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98541445,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-22T07:50:08Z\",\"WARC-Record-ID\":\"<urn:uuid:da0fca4b-c81c-4bd0-8be2-b9ede382d546>\",\"Content-Length\":\"11612\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d65b2f2-5e02-40b8-b8e4-6dc725b576e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:e73d1a05-8f89-42c1-90e6-ebc6c10d1dea>\",\"WARC-IP-Address\":\"208.97.151.98\",\"WARC-Target-URI\":\"http://devlib.symbian.slions.net/s3/GUID-ACCCB148-DAF9-59EC-B585-8EF632B9BF04.html\",\"WARC-Payload-Digest\":\"sha1:WO5VYBAFTYIRWRAL5HIMPTEEAHSADHYH\",\"WARC-Block-Digest\":\"sha1:L3RFPSBNZOJISH6DQSH3INFCPDGTTL5A\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256764.75_warc_CC-MAIN-20190522063112-20190522085112-00150.warc.gz\"}"}
https://answers.everydaycalculation.com/compare-fractions/24-4-and-8-35
[ "Solutions by everydaycalculation.com\n\n## Compare 24/4 and 8/35\n\n1st number: 6 0/4, 2nd number: 8/35\n\n24/4 is greater than 8/35\n\n#### Steps for comparing fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 4 and 35 is 140\n2. For the 1st fraction, since 4 × 35 = 140,\n24/4 = 24 × 35/4 × 35 = 840/140\n3. Likewise, for the 2nd fraction, since 35 × 4 = 140,\n8/35 = 8 × 4/35 × 4 = 32/140\n4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction\n5. 840/140 > 32/140 or 24/4 > 8/35\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.846661,"math_prob":0.9943264,"size":417,"snap":"2020-24-2020-29","text_gpt3_token_len":170,"char_repetition_ratio":0.2590799,"word_repetition_ratio":0.0,"special_character_ratio":0.46282974,"punctuation_ratio":0.067307696,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9903836,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-26T13:36:41Z\",\"WARC-Record-ID\":\"<urn:uuid:84580d07-b5f9-4af9-bd64-d6d155db3f7a>\",\"Content-Length\":\"7936\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:066cb813-275c-4900-b233-dc157e2a4194>\",\"WARC-Concurrent-To\":\"<urn:uuid:f15456dd-0074-43f5-9b5b-cf0731334652>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/24-4-and-8-35\",\"WARC-Payload-Digest\":\"sha1:UJWQZKFWKWJIOWFFO342HFYQUQXJ47I4\",\"WARC-Block-Digest\":\"sha1:YC3HOWAI3FBAGGHOEQPB3KARWMWQY2SY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347390758.21_warc_CC-MAIN-20200526112939-20200526142939-00310.warc.gz\"}"}
https://stackoverflow.com/questions/30321423/multiple-sql-queries-in-jsp
[ "# multiple sql queries in jsp [closed]\n\nThis may be a bad practice, but I'm novice in creating jsp. I want to perform multiple updates - using the if statement. There are about 6 queries that I want to use, but I can't get the code to work. Is it possible to update more than one sql in jsp?\n\nHere's my code:\n\n``````<html> //dbupdatetam.jsp\n<body>\n<%@ page import=\"java.util.* , javax.sql.* , java.sql.*\" %>\n<%\njava.sql.Connection con = null;\njava.sql.Statement s = null;\njava.sql.ResultSet rs = null;\njava.sql.PreparedStatement pst = null;\n\nString var1 = request.getParameter(\"num1\");\nString var2 = request.getParameter(\"num2\");\n\n//int var3 = Integer.parseInt(var1);\n\nString url= \"jdbc:sqlserver://HOST;databaseName=dbname\";\nString id= \"user\";\nString pass = \"pwd\";\n\ntry{\n\nClass.forName(\"com.microsoft.sqlserver.jdbc.SQLServerDriver\");\ncon = java.sql.DriverManager.getConnection(url, id, pass);\n}catch(ClassNotFoundException cnfex){\ncnfex.printStackTrace();\n}\nString sql = \"select Genre from tablename where id= '\" + var1 + \"'\";\nString sqlFic = \"update tablename set StatusID='0', Status= 'Borrowed (\" + var2 + \")' where id= '\" + var1 + \"'\";\n\ntry{\t//try start\ns = con.createStatement();\n//pst=con.prepareStatement(sql);\nrs = s.executeQuery(sql);\n%>\n<%\nString retnValue = null;\nif ( rs.next() ){ //while start\nretnValue = rs.getString(1);\n}\n%>\n<p>String value is <%=retnValue%></p>\n<% if ( retnValue != null) { //ifstart\n%>\n<p>String value is <%=retnValue%></p>\n<%\ntry{ //try1start\ns = con.createStatement();\npst = con.prepareStatement(sqlFic);\nint count = s.executeUpdate(sqlFic);\n%>\n<p>The update is successful.<%=count%> record updated successfully.</p>\n<%\n} //try1end\ncatch(Exception e){e.printStackTrace();}\nfinally{ //finallystart\nif(rs!=null) rs.close();\nif(s!=null) s.close();\nif(con!=null) con.close();\n}//finallyend\n%>\n<% } %>\n<%\n//} //whileend\n%>\n<%\n} //tryend\ncatch(Exception e){e.printStackTrace();}\nfinally { //finallystart\nif(rs!=null) rs.close();\nif(s!=null) s.close();\nif(con!=null) con.close();\n} //finallyend\n%>\n\n</body>\n</html>``````\n\nThis is where I get the var1 and var2:\n\n`````` <FORM ACTION=\"tamupdate.jsp\" METHOD=\"POST\">\n<INPUT TYPE=\"number\" NAME=\"num1\">\n<BR>\n<b>Please Enter your <b>correct</b> Employee ID as this is where the book you request will be sent.</b>\n<br><BR>\nEnter the ID of the book you'd like to check the availability:\n<INPUT TYPE=\"number\" NAME=\"num2\">\n<BR><br>\n<INPUT TYPE=\"SUBMIT\" value=\"Check Availability\">\n</FORM><br><br>\n\n<jsp:include page=\"dbupdatetam.jsp\">\n<jsp:param name=\"num1\" value=\"bookid\"/>\n<jsp:param name=\"num2\" value=\"empid\"/>\n</jsp:include>``````\n\nThis doesn't work! I am using tomcat localhost and running the jsp via Internet explorer (http://localhost:8080/filename.jsp). I get a blank screen while running this. I suspect there's an issue with the update query. Can anyone review this and tell me where I went wrong?\n\nIf you are not connecting to multiple database then there is no need to create two Connection Objects because you can achieve this requirement creating a single object. I found you are using executeQuery() method while updating the record. Use executeUpdate() method of Connection object while you are performing any DML operations. 
It returns an integer value.\n\nBelow is the working jsp code.\n\n``````<%@ page import=\"java.util.* , javax.sql.* , java.sql.*\" %>\n<%\nConnection con = null;\njava.sql.Statement s = null;\njava.sql.ResultSet rs = null;\n\nint var3 = Integer.parseInt(request.getParameter(\"num1\"));\nint var4 = Integer.parseInt(request.getParameter(\"num2\"));\n\nString url= \"***\";\nString id= \"***\";\nString pass = \"***\";\n\ntry{\n\nClass.forName(\"com.mysql.jdbc.Driver\");\ncon = java.sql.DriverManager.getConnection(url, id, pass);\n\n}catch(ClassNotFoundException cnfex){\ncnfex.printStackTrace();\n\n}\nString sql = \"select name from demo where id=\"+var3;\nString sql1 = \"update demo set name='XYZ' where id=\"+var4;\n\ntry{ //try start\n\ns = con.createStatement();\nrs = s.executeQuery(sql);\n%>\n<%\nString retnValue = null;\nif( rs.next() ){ //while start\nretnValue = rs.getString(1);\n}\n%>\n<p>String value is <%=retnValue%></p>\n<% if ( retnValue != null) { //ifstart\n%>\n<%\ntry{ //try1start\n\nint count = s.executeUpdate(sql1);\n%>\n<p>The update is successful.<%=count%> record updated successfully.</p>\n<%\n\n} //try1end\ncatch(Exception e){e.printStackTrace();}\nfinally{ //finallystart\nif(rs!=null) rs.close();\nif(s!=null) s.close();\nif(con!=null) con.close();\n}//finallyend\n%>\n<% } %>\n<%\n//} //whileend\n%>\n<%\n} //tryend\ncatch(Exception e){e.printStackTrace();}\nfinally { //finallystart\nif(rs!=null) rs.close();\nif(s!=null) s.close();\nif(con!=null) con.close();\n} //finallyend\n%>\n\n</body>\n``````\n• Thank you Badal. I'm using SQL Server, not mysql. so I changed your code just a bit, but still it's not working. I get a blank screen when I run this on the IE. I checked the DB to see if the update happened, but it didn't. Here's the updated one: Class.forName(\"com.microsoft.sqlserver.jdbc.SQLServerDriver\"); String sql = \"select Genre from tablename where id= '\" + var1 + \"'\"; String sql1 = \"update tablename set StatusID='0', Status= 'Borrowed (\" + var2 + \")' where id= '\" + var1 + \"'\"; – mathB May 19 '15 at 12:17\n• Are you passing values for var3, var4 variables declared in the earlier code? – Badal May 19 '15 at 12:28\n• Yes, instead of var3 and var4, I used var1 and var2. – mathB May 19 '15 at 12:32\n• Can you share your complete code again ? – Badal May 19 '15 at 12:34\n• Just edited the post again coz, I could not comment it here. Too long for a comment! – mathB May 19 '15 at 12:41" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51269585,"math_prob":0.5996409,"size":2866,"snap":"2019-43-2019-47","text_gpt3_token_len":792,"char_repetition_ratio":0.10342418,"word_repetition_ratio":0.04347826,"special_character_ratio":0.31088626,"punctuation_ratio":0.21284404,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9641592,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T00:48:50Z\",\"WARC-Record-ID\":\"<urn:uuid:1aec3184-651b-4d13-9647-9fbd910b7c41>\",\"Content-Length\":\"135513\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:51880386-344a-4b5a-a419-f9defa965238>\",\"WARC-Concurrent-To\":\"<urn:uuid:401e3d41-0434-4067-880e-c110bf60c78e>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/30321423/multiple-sql-queries-in-jsp\",\"WARC-Payload-Digest\":\"sha1:5OP63IHNMV44YS3755KKRPSYZPU3LZ4A\",\"WARC-Block-Digest\":\"sha1:VPRKX45HK2BZF3FT5JWYLQ7DXQO7BF2K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668544.32_warc_CC-MAIN-20191114232502-20191115020502-00471.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/15-8-plus-30-90
[ "Solutions by everydaycalculation.com\n\n1st number: 1 7/8, 2nd number: 30/90\n\n15/8 + 30/90 is 53/24.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 8 and 90 is 360\n2. For the 1st fraction, since 8 × 45 = 360,\n15/8 = 15 × 45/8 × 45 = 675/360\n3. Likewise, for the 2nd fraction, since 90 × 4 = 360,\n30/90 = 30 × 4/90 × 4 = 120/360\n675/360 + 120/360 = 675 + 120/360 = 795/360\n5. After reducing the fraction, the answer is 53/24\n6. In mixed form: 25/24\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78066486,"math_prob":0.99870497,"size":753,"snap":"2019-51-2020-05","text_gpt3_token_len":308,"char_repetition_ratio":0.15353805,"word_repetition_ratio":0.0,"special_character_ratio":0.52988046,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99746794,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T22:00:02Z\",\"WARC-Record-ID\":\"<urn:uuid:44a1e7c9-5a60-4f93-b795-86d2aa19f14a>\",\"Content-Length\":\"7949\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99983d87-8da0-423d-bba6-7dd7772cad12>\",\"WARC-Concurrent-To\":\"<urn:uuid:9712141a-c677-4104-9db3-f00024d834b9>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/15-8-plus-30-90\",\"WARC-Payload-Digest\":\"sha1:4MUOZHVOJVVPNEISYMD7ASY45JRWHAIG\",\"WARC-Block-Digest\":\"sha1:PPXYZ3I77MLWIOPWZMDFK7B47MLWMVYA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540502120.37_warc_CC-MAIN-20191207210620-20191207234620-00395.warc.gz\"}"}
https://www.maa.org/press/periodicals/loci/joma/creating-mathematical-experience-in-the-classroom-return-times-of-reflected-points
[ "", null, "# Creating Mathematical Experience in the Classroom - Return Times of Reflected Points\n\nAuthor(s):\nJorgen Berglund\n\nShortly after the question of the CSR was posed, I posed a second open question to the students. I first saw this question on the CSU, Chico Department of Mathematics Web page, posted as a challenge question for high school geometry students. According to Jim Jones, Professor of Mathematics at CSU, Chico, no responses were ever received. It has since been replaced by other challenge problems.\n\nI had used the question on several different occasions to illustrate the potential of GSP. The Dynamic Geometry students, save one, first saw the question on their take-home midterm. Each student was instructed to work independently on the question. The student who had seen the problem before was able to draw from that experience, but this did not adversely affect her or the class’s experience.\n\nConstruct a triangle and an arbitrary point on one side. Moving in a counterclockwise direction, reflect the point across the angle bisector of the adjacent angle. Continuing to move in a counterclockwise direction around the triangle, reflect the new point across the next angle bisector. Continue this process. Make and prove conjectures on whether this process will eventually lead back to the original point. Looking at specific types of triangles may lead to partial or additional results.", null, "#### Figure 4: Reflection over Angle Bisectors\n\nMy class of 19 students generated a substantial list of conjectures from this one question. In my role of coordinator, I modified the language but not the main thrust of the various conjectures, and I supplied the following list and instructions to the class. For instance, I introduced the term “return time” to replace the various ways the students described the number of reflections until the point was reflected back to its original position.\n\nA great deal of work has been done on the third problem on the take-home portion of the midterm. In this problem, a point on the side of a triangle is successively reflected across the angle bisectors. This problem is being moved over to the domain of the class project since there seems to be substantive work left. Here I will attempt to list the results so far. This list was quickly compiled, so the person cited was the person on whose paper I first saw the conjecture. My apologies to those who independently made the same or similar conjectures. I will attempt to rectify this in future updates. Since these results were obtained from the take-home exam, they do not count for credit towards the individual’s project grade; however, continued work will. I did not have time to determine which of these conjectures were proven.\n\nConjectures\n\n1. All points on all triangles have a return time of at most six (C. Rojas, et al).\n2. The mid-point of a side of an equilateral triangle has a return time of three (D. Taylor).\n3. The mid-point of the base of an isosceles triangle has a return time of three (B. Campoy).\n4. Every side of a triangle has a point whose return time is three (C. Rojas).\n5. In a triangle, the three points whose return time is three are the three points of tangency of the inscribed circle (L. Cooke).\n6. The points formed by the successive reflections of a point are co-circular (C. Finzell)\n7. Reflecting points across the angle bisectors is equivalent to rotating these points around a point centered at the incenter of the triangle (F. Rocha).\n8. 
A point, its reflection across an angle bisector, and the vertex of the angle form an isosceles triangle (C. Tedder).\n9. The reflection of a point on the side of an angle across the angle bisector will lie on the line containing the other side of the triangle (D. Taylor).\n10. The vertex of an equilateral triangle will have a return time of five (D. Taylor).\n11. The point on the side of an isosceles triangle whose distance from the vertex is equal to the distance of the point on the second side formed by the intersection of the angle bisector has a return time of three (B. Campoy).\n12. The vertex of an equilateral triangle has a return time of four (B. Campoy).\n13. Given triangle ABC and point X on segment AB, the distance of X to B must be less than the length of BC for the point’s reflection across the angle bisector of B to lie on segment BC (B. Campoy).\n\nThere were indeed proofs and partial proofs contained in students’ work. The following suggested proof was supplied by C. Rojas, who was the one student who had seen this problem when I used it in a previous course. In later communication with me, she indicated that her previous exposure to the problem and the proof then outlined guided her in her work.\n\nConsider", null, "with sides", null, "of lengths", null, ", respectively. (See Figure 5.) Select point", null, "on side", null, "and label the distance from", null, "to vertex", null, "as", null, ". The image of", null, "under a reflection across the angle bisector of", null, ",", null, ", is also a distance", null, "from", null, ", and therefore a distance", null, "from vertex", null, ".\n\nAs a consequence,", null, ", the image of", null, "under a reflection across the angle bisector of angle", null, ", is a distance", null, "from vertex", null, ", and a distance", null, "from vertex", null, ".\n\nThe next reflection across the angle bisector of", null, ", results in a point,", null, ", whose distance from", null, "is", null, ".\n\nThis is followed by a second reflection across the angle bisector of", null, ", resulting in a point", null, "whose distance from", null, "is", null, ".\n\nThe next reflection is across the angle bisector of angle", null, ", and the image,", null, ", is a distance", null, ".\n\nThe sixth and final reflection, across the angle bisector of angle", null, ", results in a point", null, "whose distance from", null, "is", null, ", therefore returning", null, "to the position occupied by the original point", null, ".", null, "#### Figure 5: Return Time of 6 in a General Triangle\n\nThe class was generally impressed by this argument and considered it a valid proof. This provided an opportunity to examine the assumptions underlying the argument.\n\nFor instance, the argument assumes that the images of the points under reflection will lie on the line containing the side of the triangle. Since this can be proven -- in fact was proven by several students -- it provided an example of the role of lemmas in mathematical proofs as well as an understanding of the importance of dialog between mathematicians.\n\nA second issue, one that was not raised in class, is the assumption that all points lie on the sides of the triangle as opposed to the lines containing the sides. In other words, there is an assumption that each subtraction results in a positive value. Since a simple adjustment can be made in Rojas’ argument to address this issue, her argument is substantially correct.\n\nAnother student, D. Taylor, approached the problem from a different perspective. 
He first proved the following lemmas.\n\nLemma: The reflection of a point on the side of an angle across the angle bisector will lie on the line containing the other side of the angle.\n\nLemma: A point and its reflections are all equidistant from the incenter of the triangle.\n\nLemma: The composition of the reflections across two concurrent lines is equivalent to a rotation about the point of intersection.\n\nThese lemmas led to a proof of the theorem, and in subsequent class discussions the class consensus was that this proof was superior to the first. It not only led to a generalization of the results to n-gons with concurrent angle bisectors, but it suggested a fundamental switch in perspective. The issue was not one about points on the side of a triangle being reflected across angle bisectors, but about arbitrary points being reflected across concurrent lines. The fact that the concurrent lines happened to be angle bisectors of a triangle whose side contained the initial point was seen as incidental.\n\nA number of students had already investigated related questions. C. Lambie had conjectured that a general point taken on the interior or exterior of the triangle had a return time of 6. C. Finzel, M. Elliot, and B. Salam had made the same conjecture about reflections across perpendicular bisectors, altitudes, and medians of a triangle. The class noted with appreciation that all of these conjectures were special cases of a more general theorem that reflections about concurrent lines will have a return time related to the number of lines.\n\nOthers, such as L. Miller, had found experimentally that for most (non-regular) quadrilaterals, the reflection points did not return, and that the cases in which they did return were precisely the cases where the angle bisectors were concurrent. We felt as though we had come closer to the deeper, more essential, relationship.\n\nWe briefly discussed the question of return times less than 6. It was not clear if D. Taylor’s proof would account for these, but apparently C. Rojas’ proof could. D. Garber then analyzed D. Taylor’s proof and claimed to have shown how return times of 5 and 3 could be accounted for, at least in certain cases. There were others who investigated those points that had a return time of less than 6, finding that there exist points with a return time of 5 (M. Hellman, M. Elliot, C. Tedder).\n\nOne last area of contribution was an attempt to systematize the results of the point-reflection investigations. This proved to be very challenging and led to a class activity in which various groups presented suggested systems. These were of various degrees of sophistication but led to a general realization that, in order to systematize the results, one needs to know a proof of each of the results. The discussion ended with a recognition of the role proof plays in systematization of mathematical theorems (de Villiers, 1999).\n\nJorgen Berglund, \"Creating Mathematical Experience in the Classroom - Return Times of Reflected Points,\" Convergence (September 2005)" ]
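The inline expressions in C. Rojas' argument above were embedded as images in the source page and survive here only as placeholders, so the algebra is hard to follow. Below is a hedged reconstruction of the distance-chasing argument in LaTeX. The labels are my own choices, not the article's original notation: triangle $ABC$ with side lengths $a = BC$, $b = CA$, $c = AB$, a starting point $P_0$ on side $AB$ at distance $x$ from $A$, and reflections taken across the bisectors at $A$, $C$, $B$ in turn.

```latex
% Hedged reconstruction of the return-time-6 argument (labels assumed, not original).
% Each reflection maps one side of the reflected angle onto the other and preserves
% the distance to that vertex, so the distances telescope:
\begin{align*}
P_0 &\in AB, & |AP_0| &= x \\
P_1 &\in AC, & |AP_1| &= x, & |CP_1| &= b - x \\
P_2 &\in CB, & |CP_2| &= b - x, & |BP_2| &= a - b + x \\
P_3 &\in BA, & |BP_3| &= a - b + x, & |AP_3| &= c - a + b - x \\
P_4 &\in AC, & |AP_4| &= c - a + b - x, & |CP_4| &= a - c + x \\
P_5 &\in CB, & |CP_5| &= a - c + x, & |BP_5| &= c - x \\
P_6 &\in BA, & |BP_6| &= c - x, & |AP_6| &= x = |AP_0|
\end{align*}
```

After six reflections the distance from $A$ returns to $x$, so $P_6 = P_0$, matching the claimed return time of six. As the article itself notes, some of the differences above can be negative for certain starting points; in that case they should be read as signed distances along the lines containing the sides, which is exactly the adjustment the class identified in Rojas' argument.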
[ null, "https://px.ads.linkedin.com/collect/", null, "https://www.maa.org/sites/default/files/images/cms_upload/fig440350.GIF", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn0748477.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn0849610.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn0950758.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1051903.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1153053.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1051903.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1254197.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1355346.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1051903.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1254197.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1355346.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/x56495.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1254197.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1457639.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1558785.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1659932.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1355346.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1558785.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1457639.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1558785.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1701083.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1802249.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1802249.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1903398.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1254197.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn2004545.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1254197.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn2105697.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1558785.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn2206845.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1558785.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn2307990.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn2409141.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1802249.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn2510284.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1254197.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn2611439.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn2510284.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/eqn1051903.gif", null, "https://www.maa.org/sites/default/files/images/cms_upload/fig506853.GIF", null, "https://www.maa.org/sites/default/files/21.03.08%20MAA%20Press%20Book.png", null, 
"https://www.maa.org/sites/default/files/21.07.26%20MathFest%202021%20Registration%20Reminder.png", null, "https://www.maa.org/sites/default/files/21.07.26%20MathFest%202021%20Chronological%20Schedule.png", null, "https://www.maa.org/sites/default/files/EOY%20Homepage%20Slider.png", null, "https://www.maa.org/sites/default/files/Service%20Center%20Update%20Homepage%20Slider.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9556895,"math_prob":0.9114147,"size":9463,"snap":"2021-31-2021-39","text_gpt3_token_len":1968,"char_repetition_ratio":0.16280791,"word_repetition_ratio":0.080024436,"special_character_ratio":0.20205009,"punctuation_ratio":0.10304709,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9879765,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,8,null,2,null,8,null,null,null,6,null,8,null,null,null,6,null,2,null,null,null,4,null,10,null,2,null,6,null,10,null,4,null,10,null,2,null,6,null,6,null,2,null,null,null,2,null,null,null,2,null,10,null,2,null,10,null,2,null,2,null,6,null,4,null,null,null,2,null,4,null,8,null,2,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-29T09:39:59Z\",\"WARC-Record-ID\":\"<urn:uuid:63c4cd7a-8caa-49e7-b66d-1e8cb08ed095>\",\"Content-Length\":\"130686\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:39de91b0-d71c-4275-ba1d-5f4058d5b930>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b936357-a7b4-43f6-9908-72f2cab8bca5>\",\"WARC-IP-Address\":\"192.31.143.111\",\"WARC-Target-URI\":\"https://www.maa.org/press/periodicals/loci/joma/creating-mathematical-experience-in-the-classroom-return-times-of-reflected-points\",\"WARC-Payload-Digest\":\"sha1:R4GUN6QJN2CQYDISOD4KFMKUM2HTBSHO\",\"WARC-Block-Digest\":\"sha1:XGXGQW44AXKKSQWODBUKWSTLFOZ2VS3Q\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153854.42_warc_CC-MAIN-20210729074313-20210729104313-00211.warc.gz\"}"}
https://www.icserankers.com/2021/07/frank-icse-solutions-for-practical-work-class10-chemistry.html
[ "# Frank Solutions for Chapter 12 Practical Work Class 10 Chemistry ICSE\n\n1. How will you identify ?\n(a) Chloride ion\n(b) Chlorine gas\n(c) Sulphur dioxide gas\n(d) A soluble carbonate\n(e) An amphoteric hydroxide\n\n(a) Chloride ion: A small amount of the salt is taken in a test tube and conc. H2SO4 is added to it and then test tube is warmed, if a colourless gas with pungent odour is evolved then chloride ions are present in the salt. It can be confirmed by bringing a glass rod dipped in ammonia solution near the gas evolved, if dense white fumes are formed then presence of chloride ions is confirmed.\n\n(b) It is a greenish yellow coloured gas with sharp pungent smell. It turns moist blue litmus paper red and finally bleaches it.\n\n(c) It is a colourless gas with a smell of burnt sulphur. It turns moist blue litmus paper red.\n\n(d) A small quantity of salt is taken in a test tube, dilute sulphuric acid is added to it. If a brisk effervescence is observed, then the gas is passed through lime water, If carbonate ion is present in the salt, the solution turns milky.\n\n(e) Zinc hydroxide is an amphoteric hydroxide. It can be identified by treating the salt solution with sodium hydroxide, if a white ppt. is formed and if it gets dissolved in excess of sodium hydroxide then it may be zinc hydroxide .\n\n2. Name the following  :\n(a) A green coloured carbonate.\n(b) A steel grey coloured solid non-metal.\n(c) A nitrate which has no water of crystallization.\n(d) A sulphate insoluble in water.\n\n(a) Copper carbonate\n(b) Selenium\n(c) Nitre: a nitrate of potassium\n(d) Anhydrous calcium sulphate\n\n3. Give two tests for each of the following :\n(a) NH3\n(b) Oxygen\n(c) Water vapour\n\n(a) NH3: It has a strong pungent smell and turns moist red litmus blue. It can also be tested by bringing a glass rod dipped in conc. HCI in contact with the gas, if the gas is ammonia then it will produce dense white fumes near the glass rod.\n\n(b) Oxygen: It is an odourless and colourless gas and turns alkaline pyrogallol brown. It can also be tested by bringing a lighted splinter near the gas, if the splinter starts glowing, the gas is oxygen.\n\n(c) Water Vapour: These are colourless vapours and are neutral to litmus. These can be tested by first condensing on the walls (cooler parts) of the test tube and then by adding anhydrous copper sulphate to the collected liquid , if the white colour of copper sulphate changes to blue, then the gas is water vapour.\n\n4. Write the steps needed for flame test ?\n\nFlame test :\n\n1. Make a loop at the tip of the platinum wire and dip it in conc. HCl.\n2. Put it on the non luminous part of the flame, to see if gives colour. Repeat the process till it gives no colour to the flame.\n3. Prepare a paste of the given salt in a watch glass using conc. HCl.\n4. Load the loop of the wire with this prepared paste and introduce it into the non luminous flame of the bunsen burner and then observe the colour of the flame indicating different elements.\n\n5. 
What happens when\n(a) NH3 solution is added to CuSO4 solution drop by drop and then in excess.\n(b) Caustic soda solution is added to Cu(NO3)2 solution and product boiled.\n(c) Common salt solution is added to silver nitrate solution and NH3 solution add to it.\n(d) Lead nitrate solution treated with calcium chloride solution and products are heated and cooled.\n\n(a) When NH3 solution is added to CuSO4 solution drop by drop, ammonium ions completely react with copper sulphate and precipitates of copper hydroxide are formed", null, "But when we add excess of ammonia, the precipitate dissolves and a soluble complex is formed.", null, "(b) When Caustic soda solution is added to Cu(NO3)2 solution, the following reaction takes place :", null, "when we heat it further, the greenish blue copper hydroxide decomposes form slightly black ppt. of copper oxide.", null, "(c) Common salt solution is added to silver nitrate solution.", null, "Common salt in NaCl, in aqueous medium it ionizes to form Na+ and Cl- ions.", null, "6. Write sqauential observation for effect of heat on\n(a) Copper nitrate\n(c) Ammonium chloride\n\n(a)\n\n1. Bluish green crystalline solid, on heating, melts to form a bluish green mass and gives off steamy vapours which condense on the cooler part of the test tube.\n2. On further heating, the bluish green mass changes to a black residue.\n3. It gives off a reddish brown gas and gives a gas which rekindles a glowing splinter, i.e. oxygen.\n(b)\n\n1. A white solid turns yellow on heating.\n2. Gives off a gas which extinguishes a burning wooden splinter.\n3. Gas evolved turns lime water milky.\n(c)\n\n1. When ammonium chloride is heated in a test tube, the lighter ammonia gas will emerge first and turn a piece of moist red litmus paper blue.\n2. Hydrogen chloride coming up next will change the litmus paper from blue back to red.\n\n7. How will you distinguish between the two black samples, CuO and MnO2 with a chemical test ?\n\nAdd concentrated hydrochloric acid to both the samples. Only MnO2 releases greenish yellow chlorine gas.\n\n8. A white solid a when heated with sodium hydroxide solution, gives a pungent gas B, which turns red litmus blue. The solid, when dissolved in dilute nitric acid and treated with silver nitrate gives a white precipitate of C which is soluble in an ammonia solution.\n\nC is silver chloride which is soluble in ammonia.\nPungent smelling gas B is ammonia.\nWhite solid A is ammonium chloride", null, "9.  Write the approximate colour of the following samples with the universal indicator :\n(a) Distilled water\n(b) Acid rain\n(c) Soap solution\n(d) Soil containing slaked lime\n(e) Gastric juice\n\n(a) Distilled water : same as that of universal indicator\n(b) Acid rain : red\n(c) Soap solution : purple\n(d) soil containing slaked lime : green\n(e) Gastric juices : orange\n\n10. Using sodium hydroxide solution, how would you distinguish :\n(a) Ammonium sulphate from sodium sulphate.\n(b) Zinc nitrate solution from calcium nitrate solution .\n(c) Iron (II) chloride from Iron (III) chloride.\n\n11. Describe in each case, one chemical test that would enable you to distinguish between following pairs of chemicals. Describe what happens with each chemical or state 'no visible reaction'.\n(i) Sodium chloride solution and sodium nitrate solution.\n(ii) Sodium sulphate solution and sodium chloride solution.\n(iii) Calcium nitrate solution and zinc nitrate solution.\n\n(i) Sodium chloride solution and sodium nitrate solution can be distinguished by using conc. 
Sulphuric add. To the salt solution, add freshly prepared ferrous sulphate solution and pour a few drops of conc. H2SO4  along the sides of the tube. If it's sodium nitrate solution then a brown ring would appear at the junction of the two liquid layers. But if its sodium chloride solution, it would not undergo any visible reaction.\n\n(ii) Sodium sulphate solution and sodium chloride solution can be distinguished by using barium chloride solution. Barium chloride solution on being added to sodium sulphate solution forms a white precipitate which is insoluble in conc. HCI. Whereas sodium chloride shows no reaction with barium chloride solution.\n\n(iii) Calcium nitrate solution and zinc nitrate solution can be distinguished by sodium hydroxide. Sodium hydroxide reacts with zinc nitrate and zinc hydroxide is formed which is soluble in excess of sodium hydroxide.", null, "Sodium hydroxide reacts with calcium nitrate and forms white ppt. of calcium hydroxide insoluble in excess of sodium hydroxide.", null, "12. State the effect of adding a small amount of : (a) sodium hydroxide (b) ammonium hydroxide followed by an excess in each case to samples of each of the salt solutions.\n(i) Calcium nitrate [ small amount ________ in excess ______].\n(ii) Zinc nitrate [small amount _______ in excess ______ ].\n(iii) Lead nitrate [ small amount _______ in excess _______].\n\n13. State what do you observe when ammonium hydroxide to Iron (III) sulphate solution.\n\n14. From the formulae listed below, choose, one, corresponding to the salt having the given description :AgCl, CuCO3, CuSO4∙5H2O, KNO3, NaCl, NaHSO4, Pb(NO3)2, ZnCO3, ZnSO4∙ 7H2O.\nOn heating this, salt changes from green to black.\n\nOn heating CuCO3, CuO is formed which is black in colour.\n\n15. (i) How would you distinguish between Zn2+ and Pb2+ using ammonium hydroxide solution ?\n(ii) Copy and complete the following table which refers to the action of heat on some carbonates :\n\n Carbonate Colour of residue on cooling Zinc carbonateLead carbonateCopper carbonate\n\n16. Write the observation for the following:\n(i) NaOH is added drop-wise till in excess to a solution of zinc sulphate.\n(ii) NH4OH is added first in a small quantity and then in excess to a solution of copper sulphate.\n(iii) Excess NH4OH is added to a substance obtained by adding hydrochloric add in silver nitrate solution.\n(iv) Moist starch iodide paper is put on the mouth of a test tube containing chlorine gas.\n(v) A paper dipped in potassium permanganate solution is put on the mouth of a test tube containing sulphur dioxide gas.\n(vi) Decomposition of bicarbonates by dil. H2SO4  .", null, "", null, "(iv) Chlorine gas turns moist starch iodide paper black.\n(v) Purple colour of potassium permanganate gets discharged.\n(vi) Brick effervescence is observed and colourless and odourless CO2 gas is evolved in both the cases.\n\n17. Sodium hydroxide solution is added first in a small quantity, then in excess to the aqueous salt solutions of copper (II) sulphate, zinc nitrate, lead nitrate , calcium chloride and iron (III) sulphate. 
Copy the following table and write the colour of the precipitate in (i) to (v) and the nature of the precipitate (soluble or insoluble ) in (vi) to (x).\n\n Aqueous salt solution Colour of precipitate when NaOH is added in a small quantity Nature of precipitate (soluble or insoluble )when NaOH is added in excess Copper (II) sulphateZinc nitrateLead nitrateCalcium chlorideIron (III) Sulphate (i)(ii)(iii)(iv)(v) (vi)(vii)(viii)(ix)(x)\n\n Aqueous salt solution Colour of precipitate when NaOH is added in a small quantity Nature of precipitate (soluble or insoluble) when NaOH is added in excess Copper (II) sulphateZinc nitrateLead nitrateCalcium chlorideIron (III) Sulphate (i) Pale blue (ii) White(iii) White(iv) White curdy ppt.(v) Reddish brown (vi) insoluble (vii) soluble (viii) soluble(ix) insoluble (x) insoluble\n\n18. A solution of hydrogen chloride in water is prepared. The following substances are added to separate portions of the solution :\n\n S. No. Substance added Gas evolved Odour 1. Calcium carbonate2. Magnesium ribbon3. Manganese (IV) oxide with heating4. Sodium sulphide ____________ ________\n\n S. No. Substance added Gas evolved Odour 1. Calcium carbonate2. Magnesium ribbon3. Manganese (IV) oxide with heating4. Sodium sulphide Co2  H2  Cl2  H2S odourlessodourlesspungentsmell of rotten eggs\n\n19. The questions (i) to (v) refer to the following salt solutions listed (a) to (f):\n(a) Copper nitrate\n(b) Iron (II) sulphate\n(c) Iron (III) chloride\n(e) Magnesium sulphate\n(f) Zinc chloride\n(i) Which two solutions will give a white precipitate when treated with dilute hydrochloric acid followed by barium chloride solution?\n(ii) Which two solutions will give a white precipitate when treated with dilute nitric acid followed by silver nitrate solution?\n(iii) Which solution will give a white precipitate when either dilute hydrochloric acid or dilute sulphuric acid is added to it?\n(iv) Which solution becomes a deep/inky blue colour when excess of ammonium hydroxide is added to it?\n(v) Which solution gives a white precipitate with excess ammonium hydroxide solution?\n\n(i) Magnesium sulphate, Iron(II) sulphate\n(ii) Zinc chloride, Iron (III) chloride\n(iv) Copper nitrate\n(v) Zinc chloride\n\n20. (a) State the colour of the residue formed when nitrates of\n1. Calcium\n2. Zinc\n4. Copper are strongly heat\n(b) Give one test each to distinguish between the following pairs of chemicals.\n1. Zinc nitrate solution and calcium nitrate solution.\n2. Sodium nitrate solution and sodium chloride solution.\n3. Iron (III) chloride solution and copper chloride solution.\n\n(a) 1. Calcium: white\n2. Zinc: white\n4. Copper: black\n\n(b) 1. Zinc nitrate and calcium nitrate solution can be distinguished by reaction with ammonium hydroxide. Zinc forms a white gelatinous ppt. whereas there is no precipitation of calcium hydroxide even with excess of ammonium hydroxide.", null, "1. (a) Sodium chloride solution and sodium nitrate solution can be distinguished by using conc. Sulphuric acid. To the salt solution, add freshly prepared ferrous sulphate solution and pour a few drops of conc. H2SO4 along the sides of the tube. If it's sodium nitrate solution then a brown ring would appear at the junction of the two liquid layers. But if its sodium chloride solution, it would not undergo any visible reaction.\n2. Iron (III) chloride solution and copper chloride solution can be distinguished by using ammonium hydroxide. Copper forms a blue ppt. 
of Cu(OH)2 which is soluble in excess of ammonium hydroxide.\nCuCl2  + 2NH4OH ⟶ Cu(OH)2 + 2NH4Cl\nWhereas iron (III) forms a reddish brown ppt. of Fe(OH)3 which is insoluble even in excess of ammonium hydroxide.\nFeC13  + 3NH4OH ⟶ Fe(OH)3  + 3NH4Cl\n\n21.\n\n Column A Column B 1. A substance that turns moist starch iodide paper blue. C. Chlorine 2. A compound which releases a reddish brown gas on reaction with concentrated sulphuric acid and copper turnings. D. Copper nitrate 3. A solution of this compound gives a dirty green precipitate with sodium hydroxide. E. Ferrous sulphate 4. A compound which on heating with sodium hydroxide produces a gas which forms dense white fumes with hydrogen chloride. A. Ammonium sulphate 5. A white solid which gives a yellow residue on heating B. Lead carbonate\n\nMatch the following :\n\n Column A Column B 1. A substance that turns moist starch iodide paper blue. A. Ammonium sulphate 2. A compound which releases a reddish brown gas on reaction with concentrated sulphuric acid and copper turnings. B. Lead carbonate 3. A solution of this compound gives a dirty green precipitate with sodium hydroxide. C. Chlorine 4. A compound which on heating with sodium hydroxide produces a gas which forms dense white fumes with hydrogen chloride. D. Copper nitrate 5. A white solid which gives a yellow residue on heating E. Ferrous sulphate\n\n22. Salts A, B, C, D and E undergo reactions (i) to (v) respectively. Identify the anion present in these salts on the basis of these reactions. Tabulate your answers in the format given below:\n(i) When silver nitrate solution is added to a solution of A, a white precipitate, insoluble in dilute nitric acid, is formed.\n(ii) Addition of dilute hydrochloric acid to B produces a gas which turns lead acetate paper black.\n(iii) When a freshly prepared solution of ferrous sulphate is added to a solution of C and concentrated sulphuric acid is gently poured from the side of the test, a brown ring is formed.\n(iv) When dilute sulphuric acid is added to D, a gas is produced which turns acidified potassium dichromate solution from orange to green.\n(v) Addition of dilute hydrochloric acid to E produces an effervescence. The gas produced turns limewater milky but does not affect acidified potassium dichromate solution.\n\n Salt Anion A B C D E\n\n23. Choose the correct answer :\n\nThe salt which in solution gives a pale precipitate with sodium hydroxide solution and a white precipitate with barium chloride solution is :\nA. Iron (III) sulphate\nB. Iron (II ) sulphate\nC. Iron (II) chloride\nD. Iron (III) chloride\n\nIron (II) sulphate\n\n24. Select from the list given (a to e) one substances in each case which matches the description given in parts (i) to (v). (Note : Each substance is used only one in the answer)\n(a) Nitroso Iron (II)\n(b) Iron (III) chloride\n(c) Chromium sulphate\n(e) Sodium chloride\n(i) A compound which is deliquescent\n(ii) A compound which is insoluble in cold water, but soluble in hot water\n(iii) The compound responsible for the brown ring during the brown ring test of nitrate iron\n(iv) A compound whose aqueous solution is neutral in nature\n(v) The compound which is responsible for the green colouration when sulphur dioxide is passed through acidified potassium dichromate solution\n\n(i) Iron (III) chloride\n(iii) Nitroso iron (II) sulphate\n(iv)Sodium chloride\n(v) Chromium sulphate\n\n25. 
What would you observe in the following cases:\nAmmonium hydroxide is first added in a small quantity and then in excess to a solution of copper sulphate.\n\nOn addition of ammonium hydroxide in a small quantity, a blue-coloured copper hydroxide precipitate is formed. This copper hydroxide of light blue colour dissolves in excess of ammonium hydroxide to yield a deep blue solution.\n\n26. Sodium hydroxide solution is added to the solutions containing the ions mentioned in List X. List Y gives the details of the precipitate. Match the ions with their coloured precipitates.\n\n List X List Y (i) Pb2+(ii) Fe2+(iii) Zn2+(iv) Fe3+(v) Cu2+(vi) Ca2+ (A) Reddish Brown(B) White insoluble in excess(C) Dirty green(D) White soluble in excess(E) White soluble in excess(F) Blue\n\n(i) D\n(ii) C\n(iii) E\n(iv) A\n(v) F\n(vi) B\n\n27.  State two observations when\n(i) Lead nitrate crystals are heated in a hard glass test tube.\n(ii) A few crystals of KNO3 are heated in a hard glass tube\n\n(i) Lead nitrate decrepitates on heating; a yellow solid is formed and it fuses with glass. Lead nitrate decomposes to lead oxide, nitrogen dioxide and oxygen.\n(ii) Oxygen is evolved.\n2KNO3→ 2KNO2 + O2\n\n28. Give a chemical test to distinguish between the following pairs of compounds:\n(i) Sodium chloride solution and sodium nitrate solution\n(ii) Hydrogen chloride gas and hydrogen sulphide gas\n(iii) Calcium nitrate gas and sulphur diaoxide gas\n(iv) Carbon dioxide gas and sulphur dioxide gas\n\n(i) Add silver nitrate solution to both solutions. Sodium chloride will form a curdy white ppt., whereas sodium nitrate will not undergo any reaction.\n(ii) Hydrogen chloride gas gives thick white fumes of ammonium chloride when a glass rod dipped in ammonia solution is held near the vapour of the acid, whereas no white fumes are observed in case of hydrogen sulphide gas.\n(iii) Calcium nitrate forms no ppt. even with addition of excess of NH4OH, whereas zinc nitrate forms a white gelatinous ppt. which dissolves in excess of NH4OH.\n(iv) Carbon dioxide gas has no effect on acidified KMnO4 or K2Cr2O7, but sulphur dioxide turns potassium permanganate from pink to colourless.\n\n29. Distinguish between the following pairs of compounds using the test given with brackets :\nDilute sulphuric acid and dilute hydrochloric acid (using barium chloride solution)\n\nSulphuric acid precipitates the insoluble sulphate of barium from the solution of barium chloride.\nBaCl2 + H2SO4→ BaSO4 + 2HCl\nDilute HCl does not react with barium chloride solution, and thus, no precipitate is produced in the reaction.\n\n30. State the inference drawn from the following observations :\n(i) On Carrying out the flame test with a salt P a brick red flame was obtained. What is the cation in P?\n(ii) A gas Q turns moist lead acetate paper silvery black. Identify the gas Q.\n(iii) pH of liquid R is 10. What kind of substance is R?\n(iv) Salt S is prepared by reacting dilute sulphuric acid with copper oxide Identify S.\n\n(i) On carrying out the flame test with a salt P, a brick red flame is obtained. Hence, the cation P is Ca2+.\n(ii) A gas Q turns moist lead acetate paper silvery black. Hence, the gas is H2S.\n(iii) pH of liquid R is 10. Hence, substance R is a base.\n(iv) Salt S is prepared by reacting dilute sulphuric acid with copper oxide. Hence, salt S is copper sulphate.\n\n31. 
State your observation in each of the following cases:\n(i) When dilute hydrochloric acid is added to sodium carbonate crystals\n(ii) When excess sodium hydroxide is added to calcium nitrate solution\n(iii) At the cathode when acidified aqueous copper sulphate solution is electrolyzed with copper electrodes\n(iv) When calcium hydroxide is heated with ammonium chloride crystals\n(v) When moist starch iodide paper is introduced into chlorine gas\n\n(i) Sodium carbonate crystals on reaction with dilute HCl form sodium chloride, water and carbon dioxide, which is evolved with brisk effervescence. This is a neutralisation reaction as sodium carbonate is a basic salt, while hydrochloric acid is an acid. The chemical equation for this reaction is as follows:\nNa2CO3 + 2HCl → 2NaCl +H2O + CO2\n(ii) Calcium nitrate solution on reaction with excess of sodium hydroxide produces calcium hydroxide and sodium nitrate. Calcium nitrate reacts with excess of sodium hydroxide to form a white precipitate of calcium hydroxide, which is sparingly soluble, and colourless sodium nitrate. The reaction is as follows:\nCa(NO3)2 + 2NaOH → Ca(OH)2 + 2NaNO3\n(iii) Acidified aqueous copper sulphate solution is electrolysed with copper electrodes by electrolysis. The electrolysis of an aqueous solution of copper sulphate using copper electrodes (i.e. using active electrodes) results in the transfer of copper metal from the anode to the cathode during electrolysis. Copper sulphate is ionised in aqueous solution.\nChemical equation:\nCuSO4 → Cu2+ + SO42-\nThe positively charged copper ions migrate to the cathode, where each gains two electrons to become copper atoms which are deposited on the cathode.\nCu2+ + 2e- → Cu\nHence, the colour of copper sulphate changes from blue to colourless.\n(iv) When ammonium chloride is heated with calcium hydroxide, ammonia gas is released.\n2NH4Cl + Ca(OH)2→ CaCl2 + 2NH3 + 2H2O\nThe liberated gas turns red litmus blue.\n(v) When moist starch iodide paper is introduced into chlorine gas, chlorine oxidises iodide to iodine, which shows up as blue when it forms a complex with starch.\n\n32. The following table shows the tests a student performed on four different aqueous solutions which are X,Y,Z and W. Based on the observations provided, Identify the cation present\n\n Chemical Test Observation Conclusion To Solution X, ammonium hydroxide is added in minium quantity first and then in excess A dirty white precipitate is formed which dissolves in excess to form a clear solution (i) To Solution Y, ammonium hydroxide is added in minimum quantity first and then in excess A pale blue precipitate is formed which dissolves in excess to form a clear inky blue solution (ii) To solution W, A small quantity of sodium hydroxide solution is added and then in excess A white precipitate is formed which remains insoluble (iii) To a salt Z, calcium hydroxide solution is added and then heated A pungent smelling gas turning moist red litmus paper blue is obtained (iv)\n\n(i) Zn2+\n(ii) Cu2+\n(iii) Ca2+\n(iv) NH4+\n\n33. 
Identify the anion present in each of the following compounds :\n(i) A salt M on treatment with concentrated sulphuric acid produces a gas which fumes in moist air and gives dense fumes with ammonia\n(ii) A salt D on treatment with dilute sulphuric acid produces a gas which turns lime water milky but has no effect on acidified potassium dichromate solution\n(iii) When barium chloride solution is added to salt solution E a white precipitate insoluble in dilute hydrochloric acid is obtained\n\n(i) Chloride ion (Cl-)\n(ii) Carbonate (CO32-)\n(iii) Sulphate (SO42-)\n\n34. From the list of the following salts, choose the salt that most appropriately fits each of the following descriptions :\n[AgCl,MgCl2,NaHSO4,PbCO3,ZnCO3,KNO3,Ca(NO3)2]\n(i) A deliquescent salt\n(ii) An insoluble chloride\n(iii) On heating, this salt gives a yellow residue when hot and a white residue when cold\n(iv) On heating this salt, a brown coloured gas is evolved\n\n(i) A deliquescent salt: MgCl2\n(ii) An insoluble chloride: AgCl\n(iii) On heating, this salt gives a yellow residue when hot and a white residue when cold: ZnCO3\n(iv) On heating this salt, a brown coloured gas is evolved: Ca(NO3)2\n\n35. Give balanced chemical equations to prepare the following salts:\n(ii) Sodium sulphate using dilute sulphuric acid\n(iii) Copper chloride using copper carbonate\n\n36. Identify the cations in each of the following cases:\n(i) NaOH solution when added to solution (A) gives a reddish brown precipitate\n(ii) NH4OH solution when added to solution (B) gives a white ppt. which does not dissolve in excess.\n(iii) NaOH solution when added to solution (C) gives a white ppt. which is insoluble in excess", null, "37. Identify the gas evolved and give the chemical test in each of the following cases:\n(i) Dilute hydrochloric acid reacts with sodium sulphite\n(ii) Dilute hydrochloric acid reacts with iron (II) sulphide\n\n(i) Sulphur dioxide\nFreshly prepared K2Cr2O7 paper changes from orange to green.\n(ii) Hydrogen sulphide\nThe gas released has a rotten egg smell.\n\n38. Identify the salts P and Q from the observations given below:\n(i) On performing the flame test, salt P produces a lilac coloured flame and its solution gives a white precipitate with silver nitrate solution, which is soluble in ammonium hydroxide solution.\n(ii) When dilute HCl is added to a salt Q, a brisk effervescence is produced and the gas turns lime water milky. When NH4OH solution is added to the above mixture (after adding dilute HCl), it produces a white precipitate which is soluble in excess NH4OH solution." ]
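As a supplement to question 37 above (not part of the original Frank solutions), the two reactions behind those gas tests can be written as balanced equations. These are standard textbook equations rather than text from this page:

```latex
% Supplementary balanced equations for question 37 (standard chemistry, not from the source).
\begin{align*}
\text{Na}_2\text{SO}_3 + 2\,\text{HCl} &\rightarrow 2\,\text{NaCl} + \text{H}_2\text{O} + \text{SO}_2\uparrow \\
\text{FeS} + 2\,\text{HCl} &\rightarrow \text{FeCl}_2 + \text{H}_2\text{S}\uparrow
\end{align*}
```

The evolved SO2 accounts for the orange-to-green change of acidified potassium dichromate paper, and the evolved H2S for the rotten-egg smell mentioned in the answer.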
[ null, "https://1.bp.blogspot.com/-fGrULELx44U/YNQvFQCsznI/AAAAAAAAIS8/4uBgj-Vz02Q8GiecQuqthIDACFd09tE1ACLcBGAsYHQ/w557-h26/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B1.PNG", null, "https://1.bp.blogspot.com/-1ncsxkPygaI/YNQwN9NUJlI/AAAAAAAAITM/qDfiYY4_gRgTkZgw7xyCW-HXvTQe_p6AQCLcBGAsYHQ/w387-h28/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B2.PNG", null, "https://1.bp.blogspot.com/-GvWaY0xkJaw/YNQw6RVdCFI/AAAAAAAAITc/gH0YkqVjXWgbNI5WlkN3SeJmW_0KpzQ9gCLcBGAsYHQ/w508-h22/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B3.PNG", null, "https://1.bp.blogspot.com/-uG5U-5nESt0/YNQxjXeGvkI/AAAAAAAAITk/vigqHbfiow4sBjKBA5C2Wll5mEWWdWluwCLcBGAsYHQ/w296-h30/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B4.PNG", null, "https://1.bp.blogspot.com/-XuUmcVVKR5U/YNQx0m9EiLI/AAAAAAAAITs/cpfYg65sL6oia1_4lZCOpwfIkSxHpttagCLcBGAsYHQ/w481-h24/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B5.PNG", null, "https://1.bp.blogspot.com/-70WdrVtcXFM/YNQzICpzH6I/AAAAAAAAIT0/3sgHbHdJe0MLh8tJZG4eNohKQRuvGtrfACLcBGAsYHQ/w643-h372/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B6.PNG", null, "https://1.bp.blogspot.com/-Bfjibx7hsLc/YNQ07-7d9TI/AAAAAAAAIUE/511cbGaw9P8o-JGKhG2Wo9HHbYDKQI55QCLcBGAsYHQ/w269-h120/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B7.PNG", null, "https://1.bp.blogspot.com/--Jpcnd1LjGA/YNRBIBbzuhI/AAAAAAAAIUU/Wuws2mMB--A6kCeM-4J1eWIZ5rICxidfwCLcBGAsYHQ/w431-h26/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B9.PNG", null, "https://1.bp.blogspot.com/-AB7xiNynIh8/YNRBSqQ-LHI/AAAAAAAAIUY/A9D-Pw44-7sJvLOcRevZOlLx22pjSpruACLcBGAsYHQ/w414-h25/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B10.PNG", null, "https://1.bp.blogspot.com/-R-q23UOLN6M/YNRMQlsTVjI/AAAAAAAAIU8/L234XMo5zykcAKgiCCPi2yJJQDckC2QIgCLcBGAsYHQ/w321-h45/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B14.PNG", null, "https://1.bp.blogspot.com/-fc-40SGKuFo/YNRMtrYU_dI/AAAAAAAAIVE/XgvfDVZaX_kmlraJhYDAcsfM7jfys_dSgCLcBGAsYHQ/w598-h495/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B15.PNG", null, "https://1.bp.blogspot.com/-cfSNmgLDj5E/YNRXOAauDZI/AAAAAAAAIVM/e0hSpPSo2gMekEURayRr00uJtpqm4rKZwCLcBGAsYHQ/w345-h58/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B16.PNG", null, "https://1.bp.blogspot.com/-LPcSV93W3Lw/YNRmY3qtLYI/AAAAAAAAIVc/uc_S_BKVFkc_oDN5BwweU-dv9KNmBSDWwCLcBGAsYHQ/w476-h217/Frank%2BSolution%2BClass%2B10%2BCh%2B12%2BPractical%2BWork%2Bimg%2B18.PNG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86114365,"math_prob":0.8026576,"size":24940,"snap":"2023-40-2023-50","text_gpt3_token_len":6193,"char_repetition_ratio":0.18258743,"word_repetition_ratio":0.15129411,"special_character_ratio":0.22493985,"punctuation_ratio":0.094989106,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9549695,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T18:41:09Z\",\"WARC-Record-ID\":\"<urn:uuid:57e72bce-386f-43eb-ac6c-eb62e523d2c7>\",\"Content-Length\":\"333361\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1e82b15-a8f6-4d44-89b6-6b4cad17619c>\",\"WARC-Concurrent-To\":\"<urn:uuid:15461ed1-dff1-46bd-8a8f-d39e8700868a>\",\"WARC-IP-Address\":\"216.239.32.21\",\"WARC-Target-URI\":\"https://www.icserankers.com/2021/07/frank-icse-solutions-for-practical-work-class10-chemistry.html\",\"WARC-Payload-Digest\":\"sha1:QVP3GOMTUPNZ2EG6YAONA676V3EVDABA\",\"WARC-Block-Digest\":\"sha1:SXHWEUO3UVVRLTVHEZEJNLMAMVR57JIA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511170.92_warc_CC-MAIN-20231003160453-20231003190453-00329.warc.gz\"}"}
https://www.imes.boj.or.jp/research/abstracts/english/11-E-20.html
[ "Discussion Paper Series 2011-E-20\n\n# Analytical Solution for the Loss Distribution of a Collateralized Loan under a Quadratic Gaussian Default Intensity Process\n\n## Satoshi Yamashita, Toshinao Yoshiba\n\nIn this study, we derive an analytical solution for expected loss and the higher moment of the discounted loss distribution for a collateralized loan. To ensure nonnegative values for intensity and interest rate, we assume a quadratic Gaussian process for default intensity and discount interest rate. Correlations among default intensity, discount interest rate, and collateral value are represented by correlations among Brownian motions driving the movement of the Gaussian state variables. Given these assumptions, the expected loss or the m-th moment of the loss distribution is obtained by a time integral of an exponential quadratic form of the state variables. The coefficients of the form are derived by solving ordinary differential equations. In particular, with no correlation between default intensity and discount interest rate, the coefficients have explicit closed form solutions. We show numerical examples to analyze the effects of the correlation between default intensity and collateral value on expected loss and the standard deviation of the loss distribution.\n\nKeywords: default intensity; stochastic recovery; quadratic Gaussian; expected loss; measure change\n\nViews expressed in the paper are those of the authors and do not necessarily reflect those of the Bank of Japan or Institute for Monetary and Economic Studies." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8987214,"math_prob":0.9363726,"size":1394,"snap":"2022-40-2023-06","text_gpt3_token_len":236,"char_repetition_ratio":0.14316547,"word_repetition_ratio":0.019704433,"special_character_ratio":0.16714491,"punctuation_ratio":0.09210526,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9718337,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T13:32:38Z\",\"WARC-Record-ID\":\"<urn:uuid:c60e8e27-3aa6-4c1b-b2a8-a939ec24c543>\",\"Content-Length\":\"3487\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc53e1e1-f8aa-42bf-ac25-2ffb10f5f976>\",\"WARC-Concurrent-To\":\"<urn:uuid:69825d15-3f75-4d77-a147-acc0b702d638>\",\"WARC-IP-Address\":\"23.212.249.85\",\"WARC-Target-URI\":\"https://www.imes.boj.or.jp/research/abstracts/english/11-E-20.html\",\"WARC-Payload-Digest\":\"sha1:AMSHFBNNJWPYW3OASWOKJVUNXW43GAXC\",\"WARC-Block-Digest\":\"sha1:UIELB5OARGERUCTBL7ASV6MRY4JML5CX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334579.46_warc_CC-MAIN-20220925132046-20220925162046-00307.warc.gz\"}"}
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/18773
[ "```On Mon, Jul 30, 2001 at 08:10:04AM +0900, MikkelFJ wrote:\n> > This seems bad, or do you have a weird definition of multiplication\n> > and division that makes it work out? I always thought even the\n> > integers mod something were a ring and not a field. Rational support\n> > seems like the only sane option.\n\nintegers mod x are a field when x is a prime number, otherwise its a ring.\n\nwhen you take N (the integers) mod a prime number, the multiplicative\ninverse of all (resulting) numbers exists, and therefore it is a field.\n\nYou than define the divison as the inverse of the multiplication - thus the\nresults are different from \"normal\" divison.\n\ne.g:\n\nIntegers mod 5 gives the numbers\n0\n1\n2\n3\n4\n\nand:\n\n2*3 = 1 (because 2*3 would give 6, and 6 mod 5 = 1)\ntherefore 2 is the multiplicative inverse of 3 (and vice versa).\n\nThis makes 2 = 1/3, or 3 = 1/2\n\nAll in the field of integers mod 5 of course.\n\nOn the other hand, of you take the integers mod 4, this gives you the\nnumbers\n0\n1\n2\n3\n\nnow 2 doesn't have a multiplicative inverse, because there is no element of\nthe set which would give the result \"1\" when multiplied with 2.\n0*2 = 0\n1*2 = 2\n2*2 = 0\n3*2 = 2\n\nThis (and that there can be two non-zero numbers which have the product \"0\")\nmean that the integers mod 4 are not a field ('cause every element of a\nfield must have a multiplicative inverse).\n\ngreetings, Florian Pflug\n\nPS: I hope you understand what I mean with \"multiplicative inverse\". I have\n_absolutly_ no idea how this is called in english - it's \"multiplikatives\ninverses\" in german, and I tried to translate it ;-))\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92304325,"math_prob":0.94414955,"size":1570,"snap":"2020-45-2020-50","text_gpt3_token_len":444,"char_repetition_ratio":0.14942528,"word_repetition_ratio":0.013201321,"special_character_ratio":0.2923567,"punctuation_ratio":0.0990991,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99457765,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-03T02:25:47Z\",\"WARC-Record-ID\":\"<urn:uuid:d0ab3a36-dd3f-4c46-a7ba-a7347196c2d1>\",\"Content-Length\":\"6055\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:906dcf45-80b4-4bb3-9785-6a8a711b2298>\",\"WARC-Concurrent-To\":\"<urn:uuid:28853c85-2d93-485f-91dd-4cf7ecb838ab>\",\"WARC-IP-Address\":\"133.44.98.95\",\"WARC-Target-URI\":\"http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/18773\",\"WARC-Payload-Digest\":\"sha1:SSSQNAD4BU56YFPPJV47BZFPD63UDUQY\",\"WARC-Block-Digest\":\"sha1:OG3AG3Y4ARYJZ4ERUB5QJ3YVPYG2HTPF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141717601.66_warc_CC-MAIN-20201203000447-20201203030447-00678.warc.gz\"}"}
https://chryswoods.com/beginning_c++/syntax.html
[ "# Syntax compared to Python\n\nLike (nearly) all other programming languages, C++ provides syntax for describing comments, loops, conditions and functions. As the concepts are similar to those in Python, we will run quickly through them here now.\n\n## Comments\n\nYou can add comments to a C++ source file using either `//` to comment out a whole line, or enclose blocks of lines between `/*` and `*/`. For example,\n\n``````\n// this line is a comment\n\nint main() // everything after here is a comment\n{\n/* We have a comment here\nthat can span multiple lines\nuntil we see the next...\n*/\n\n// another single line comment\n\nreturn 0;\n}``````\n\nThere are many tools, such as doxygen, which can then extract these comments to help you auto-generate documentation for your C++ program.\n\n## Conditions\n\nAs in Python, C++ conditions provide a way of choosing which code to execute based on whether or a condition is true or false. The syntax for a general C++ condition is;\n\n``````if (condition1)\n{\n//execute this code if condition1 is true\n}\nelse if (condition2)\n{\n//execute this code if condition2 is true\n}\nelse\n{\n//execute this code if neither condition1 or condition2 are true\n}``````\n\nwhere `condition1` and `condition2` are statements that evaluate to `true` or `false`, for example `i < 10`, or `j >= 100`. Example conditions include;\n\n• `i < j` : The value of `i` is less than the value of `j`\n• `i <= j` : The value of `i` is less than or equal to the value of `j`\n• `i == j` : The value of `i` equals the value of `j`\n• `i >= j` : The value of `i` is greater than or equal to the value of `j`\n• `i > j` : The value of `i` is greater than the value of `j`\n• `i != j` : The value of `i` is not equal to the value of `j`\n\nWhile the `if` part of the condition is required, the `else if` and `else` clauses of the condition are optional.\n\nFor example, copy this code into a C++ source file called `condition.cpp`\n\n``````#include <iostream>\n\nint main()\n{\nint i = 35;\n\nif (i < 10)\n{\nstd::cout << \"i is less than 10\" << std::endl;\n}\nelse if (i >= 100)\n{\nstd::cout << \"i is more than or equal to 100\" << std::endl;\n}\nelse\n{\nstd::cout << \"i is somewhere between 10 and 100\" << std::endl;\n}\n\nreturn 0;\n}``````\n\nCompile and run this program using\n\n``````g++ condition.cpp -o condition\n./condition``````\n\nDoes it print what you expect? Try changing the value of `i`. Does the program still behave the way you expect?\n\nC++ uses curly brackets `{ }` to group lines of code together into blocks. This is different to Python, which uses indentation. The indentation in the above examples is therefore not required. However, it is conventional to indent code blocks like this, as it makes them easier to read. For example, here is the above `condition.cpp` rewritten using multiple lines of code per lines of text, and not using indentation of the code blocks… As I hope you see, while it is correct C++, it is pretty unreadable!\n\n``````#include <iostream>\n\nint main(){ int i = 35; if (i < 10){ std::cout << \"i is less than 10\" << std::endl;}\nelse if (i >= 100){\nstd::cout << \"i is more than or equal to 100\" << std::endl;}\nelse { std::cout << \"i is somewhere between 10 and 100\" << std::endl;} return 0;}``````\n\n## Loops\n\nAs in Python, a C++ loop is a way of describing a block of code that will be executed multiple times. 
The syntax for a general C++ loop is;\n\n``````for ( initialise ; condition ; increment )\n{\n//code that should be run\n}``````\n\nwhere;\n\n• the block of code to be run is specified within the `{ }` curly brackets,\n• `initialise` initialises the counter of the loop,\n• `condition` is a condition that tests whether or not the loop should continue. This should return `true` (perform another iteration) or `false` (stop now),\n• and `increment` increases (increments) the counter after every loop iteration.\n\nFor example, the following C++ source `loop.cpp` loops from 1 to 10, printing out the five times table for each iteration of the loop.\n\n``````#include <iostream>\n\nint main()\n{\nfor (int i=1; i<=10; i=i+1)\n{\nstd::cout << i << \" times 5 equals \" << (i*5) << std::endl;\n}\n\nreturn 0;\n}``````\n\nCopy the above into a file called `loop.cpp` and compile and run using\n\n``````g++ loop.cpp -o loop\n./loop``````\n\nYou should see the five times table printed out.\n\nAs another example, the following C++ source `countdown.cpp` counts down from 10 to 1, before printing “Lift off!”.\n\n``````#include <iostream>\n\nint main()\n{\nfor (int i=10; i>0; i=i-1)\n{\nstd::cout << i << std::endl;\n}\n\nstd::cout << \"Lift off!\" << std::endl;\n\nreturn 0;\n}``````\n\nAgain, copy the above into a file called `countdown.cpp` and compile and run using\n\n``````g++ countdown.cpp -o countdown\n./countdown``````\n\nNote that many C++ programmers use shorthand notation for the increment part of the loop. For example;\n\n• `i = i + 2` can be shorthanded as `i += 2`\n• `i = i + 1` can be shorthanded as `i += 1`, or even better `i++` or `++i`\n• `i = i - 2` can be shorthanded as `i -= 2`\n• `i = i - 1` can be shorthanded as `i -= 1`, or even better `i--` or `--i`\n\nAlso note that while it is conventional to use `i` as the name of the loop variable, it is not required. You can use any name you wish.\n\nCopy the following code into the C++ source file `loops.cpp`;\n\n``````#include <iostream>\n\nint main()\n{\nstd::cout << \"Loop 1\" << std::endl;\nfor (int i=0; i<10; ++i)\n{\nstd::cout << i << std::endl;\n}\n\nstd::cout << \"Loop 2\" << std::endl;\nfor (int i=500; i>0; i -= 100)\n{\nstd::cout << i << std::endl;\n}\n\nstd::cout << \"Loop 3\" << std::endl;\nfor (int puppy=30; puppy<=100; puppy += 5)\n{\nstd::cout << puppy << std::endl;\n}\n\nstd::cout << \"Loop 4\" << std::endl;\nfor (int i=1; i<=3; ++i)\n{\nfor (int j=1; j<=3; ++j)\n{\nstd::cout << (i*j) << \" \";\n}\n\nstd::cout << std::endl;\n}\n\nreturn 0;\n}``````\n\nWhat do you think this program will do? Compile and run using\n\n``````g++ loops.cpp -o loops\n./loops``````\n\nDid you see what you expected printed to the screen?\n\nPlay with this code by changing the initialise, condition and increment parts of the loop and see that it does what you expect.\n\nAs you would expect, you can nest one loop inside another, and you can also nest conditions inside loops (or loops inside conditions). 
For example;\n\n``````#include <iostream>\n\nint main()\n{\nint n = 42;\n\nif (n < 0)\n{\nstd::cout << n << \" is negative.\" << std::endl;\n}\nelse if (n > 100)\n{\nstd::cout << n << \" is large and positive.\" << std::endl;\n}\nelse if (n == 10)\n{\nfor (int i=10; i>0; i-=1)\n{\nstd::cout << i << \"...\" << std::endl;\n}\nstd::cout << \"Blast off!\" << std::endl;\n}\nelse if (n == 42)\n{\nstd::cout << \"The answer to life, the universe and everything!\" << std::endl;\n}\nelse\n{\nstd::cout << \"What is \" << n << \"?\" << std::endl;\n}\n\nreturn 0;\n}``````\n\nWhat will this program do for different values of `n`?\n\n## Functions\n\nAs in Python, C++ provides a way of packaging regularly used code into functions. The syntax for a general C++ function is;\n\n``````return_type function_name( arguments )\n{\n//do something\n\nreturn return_value;\n}``````\n\nwhere\n\n• `function_name` is the name of the function,\n• the code to execute within the function is held within the curly brackets,\n• arguments to the function are supplied in `arguments`,\n• `return_type` specifies the type of the value returned by the function, and\n• `return` is used to return a value from the function.\n\nNote that you can have as many or few arguments as you want (including zero arguments), but you must specify the type of each argument. You can also have a function that returns nothing by specifying the return type as `void`.\n\nFor example, this function returns the sum of two integers;\n\n``````int sum( int a, int b )\n{\nint c = a + b;\nreturn c;\n}``````\n\nwhile this function just prints the passed string (returning nothing),\n\n``````void print( std::string s )\n{\nstd::cout << s << std::endl;\n}``````\n\nwhile this function calculates the square of a double,\n\n``````double square( double x )\n{\nreturn x * x;\n}``````\n\nwhile this function joins two strings together with a space,\n\n``````std::string join( std::string a, std::string b )\n{\nreturn a + \" \" + b;\n}``````\n\nwhile this function just prints hello world…\n\n``````void print_hello()\n{\nstd::cout << \"Hello World\" << std::endl;\n}``````\n\nYou can put as many lines of code as you want within the function. However, for readability, it is worth considering breaking large functions (more than 100’s of lines of code) into a set of smaller, more readable functions.\n\nYou can call a function by using its name, and by passing arguments within round brackets `( )`.\n\nFor example, copy the below code into the C++ source file `functions.cpp`\n\n``````#include <iostream>\n\nint sum( int a, int b )\n{\nint c = a + b;\nreturn c;\n}\n\nvoid print( std::string s )\n{\nstd::cout << s << std::endl;\n}\n\ndouble square( double x )\n{\nreturn x * x;\n}\n\nstd::string join( std::string a, std::string b )\n{\nreturn a + \" \" + b;\n}\n\nvoid print_hello()\n{\nstd::cout << \"Hello World\" << std::endl;\n}\n\nint main()\n{\nstd::cout << \"5 + 3 equals \" << sum(5,3) << std::endl;\n\nprint(\"Hello from a function!\");\n\nstd::cout << \"The square of 3.5 is \" << square(3.5) << std::endl;\n\nstd::cout << join(\"Hello\", \"World\") << std::endl;\n\nprint_hello();\n\n//you can pass the return value from one function as the argument\n//of another, e.g.\nprint( join(\"Hello\", join(\"from\", \"C++\")) );\n\nreturn 0;\n}``````\n\nWhat do you think will be printed by this program? 
Compile and run using\n\n``````g++ functions.cpp -o functions\n./functions``````\n\nDid you see what you expected?\n\nOne problem you may encounter is that you can only use a function in a file after it has been declared. For example, copy the below code into `broken.cpp`;\n\n``````#include <iostream>\n\nint main()\n{\nprint_hello();\nreturn 0;\n}\n\nvoid print_hello()\n{\nstd::cout << \"Hello World!\" << std::endl;\n}``````\n\nand then try to compile using\n\n``g++ broken.cpp -o broken``\n\nYou should see that you get an error saying that `print_hello` is undeclared on line 5. For example, the error I get is;\n\n``````broken.cpp:5:5: error: use of undeclared identifier 'print_hello'\nprint_hello();\n^\n1 error generated.``````\n\nThis is because the C++ compiler will read the code from the top of the file to the bottom. This means that it has not seen the definition of the function `print_hello()` on line 9 when it sees a call to the function on line 5. As it doesn’t know what `print_hello()` is on line 5, the compiler exits with an error.\n\nOne way to solve this is to write all function definitions above wherever they are called, e.g. we could move the definition of `print_hello()` above the function call;\n\n``````#include <iostream>\n\nvoid print_hello()\n{\nstd::cout << \"Hello World!\" << std::endl;\n}\n\nint main()\n{\nprint_hello();\nreturn 0;\n}``````\n\nThis fixes the problem in this specific case. However, it can sometimes be difficult, or even impossible, to define a function before it is called. To solve this problem, C++ allows you to declare a function separately from where it is defined. Declaring a function means specifying its name, arguments and return type. For example, copy the below into a new source file called `fixed.cpp`\n\n``````#include <iostream>\n\n// declare the function\nvoid print_hello();\n\nint main()\n{\nprint_hello();\nreturn 0;\n}\n\n// define the function\nvoid print_hello()\n{\nstd::cout << \"Hello World!\" << std::endl;\n}``````\n\nCompile and run using\n\n``````g++ fixed.cpp -o fixed\n./fixed``````\n\nYou should see that the program is fixed, and outputs “Hello World!” to the screen.\n\nNote that you can declare a function as many times as you want. You can only define a function once (this is known as the “one definition rule”). The name, argument types and return types of the declaration and definition must all match. For example, this is incorrect;\n\n``````#include <iostream>\n\n//declared with two arguments of type 'int'\nint sum(int a, int b);\n\nint main()\n{\nstd::cout << sum(10, 20) << std::endl;\n}\n\n//definition uses different types to the declaration\ndouble sum(double a, double b)\n{\nreturn a + b;\n}``````\n\nwhile this is the correct version\n\n``````#include <iostream>\n\n//Declared with two arguments of type 'int'\nint sum(int a, int b);\n\n//Declared with two arguments of type 'double'\ndouble sum(double c, double d);\n\n//Note that it is ok to declare the function twice, even\n//using different argument names (as long as the type\n//are correct) indeed, you don't have to specify the\n//argument names.\ndouble sum(double, double);\n\nint main()\n{\nstd::cout << sum(10, 20) << std::endl;\n}\n\n//Definitions must use the same types as the declaration,\n//and must provide argument names since they are referenced\n//in the body of the function. 
Unlike the types, names don't need to\n//match those used in the declarations (although it's good\n//practice to keep them consistent when names are used.)\n\n//Defined with two arguments of type 'int'\nint sum(int c, int d)\n{\nstd::cout << \"Sum of two ints is: \";\nreturn c + d;\n}\n\n//Defined with two arguments of type 'double'\ndouble sum(double a, double b)\n{\nstd::cout << \"Sum of two doubles is: \";\nreturn a + b;\n}``````\n\nTry changing the type of the arguments that are passed to the sum function in main to see what happens, e.g. `sum(10.0, 20.0)`. What happens when you mix the types of the arguments?\n\n### Multi-file programs\n\nBecause you can only have one definition of a function, but you will likely want multiple declarations of the function, it is common to split the definitions and declarations into different files.\n\n• `.cpp` files : These should contain all of your function definitions and main code.\n• `.h` files : These should contain all of your function declarations.\n\nThe `.h` files are called “header files”. They are used to hold all of your function declarations together. For example, create three files, `sum.cpp`, `sum.h` and `main.cpp`. Into these files copy the below code;\n\nInto `sum.h` type,\n\n``````#ifndef _SUM_H\n#define _SUM_H\n\n/* Function to return the sum of\nthe two arguments */\ndouble sum(double a, double b);\n\n#endif``````\n\ninto `sum.cpp` type,\n\n``````#include \"sum.h\"\n\ndouble sum(double a, double b)\n{\nreturn a + b;\n}``````\n\nand into `main.cpp` type,\n\n``````#include <iostream>\n\n#include \"sum.h\"\n\nint main()\n{\nstd::cout << sum(10,20) << std::endl;\n}``````\n\nTo compile and run the program type\n\n``````g++ main.cpp sum.cpp -o sum\n./sum``````\n\nNote that you need to supply all of the `.cpp` files required, but do not need to include the `.h` file. This is because the `#include \"sum.h\"` in the two `.cpp` files copies and pastes the contents of `sum.h` into both of these files.\n\nHeader files are very useful, as they allow you to collect all of your declarations together, and `#include` them into your source files without having to continually type them out again and again. This reduces typing and reduces errors.\n\nOne problem with header files is that they should only be included once in a source file. As this is difficult to achieve in practice, “header guards” are used to check if a header file has been included more than once, and to then stop the second (or subsequent) includes.\n\nFor example, the header guards in `sum.h` mean that this code is ok;\n\n``````#include <iostream>\n\n//included the first time - we want this inclusion\n#include \"sum.h\"\n\n//included by mistake the second time - we don't want this inclusion\n#include \"sum.h\"\n\nint main()\n{\nstd::cout << sum(10,20) << std::endl;\n}``````\n\n“Header guards” are the lines `#ifndef _SUM_H`, `#define _SUM_H` and `#endif` that appear in `sum.h`.\n\nThe first time `sum.h` is included using `#include \"sum.h\"`, the `#ifndef _SUM_H` in `sum.h` evaluates to `true`, and so all of the code between that line and `#endif` is copied and pasted into the source file (`#ifndef` means “if not defined then”). The first of these lines is `#define _SUM_H`, which defines the macro `_SUM_H`. 
This means that the second time `sum.h` is included, the `#ifndef _SUM_H` evaluates to `false` (as `_SUM_H` is now defined), and so none of the lines between here and `#endif` are copied and pasted.\n\nThis is an inelegant way of solving the multi-include problem, and is a legacy of C++ being developed from C. The commands `#include`, `#ifndef`, `#define` and `#endif` are in the “C Preprocessor language” (`cpp`). All C++ compilers will preprocess C++ files using `cpp` before they are compiled, meaning that you can use “C preprocessor directives” such as `#include` and `#ifndef` in your C++ source file. If you want to learn more about `cpp` then look here.\n\nIn summary, for larger programs you should separate declarations into header files and definitions into source files. Header files should be protected with header guards. Finally, to make it easier to find, it is worth placing the `main` function into a source file called `main.cpp`.\n\n# Exercise\n\nCreate a C++ source file called `main.cpp` and copy into it;\n\n``````#include <iostream>\n\n#include \"timestable.h\"\n\nint main()\n{\nstd::cout << \"Five times table\" << std::endl;\ntimestable(5,10);\n\nstd::cout << \"Twelve times table\" << std::endl;\ntimestable(12,10);\n\nreturn 0;\n}``````\n\nCreate a header file called `timestable.h` in which you will write the declaration of the `timestable` function. This function should take two arguments; the times table to print, and the maximum number to reach (e.g. here we go to a maximum of 10 x 5 and 10 x 12).\n\nCreate a source file called `timestable.cpp` in which you will write the definition of the `timestable` function. This should print out the required timestable.\n\nOnce you have written the functions, compile and run using\n\n``````g++ main.cpp timestable.cpp -o timestable\n./timestable``````\n\nIf you get stuck, you can look at example solutions here." ]
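If you want to check your own attempt at the exercise above, the following is one possible sketch. It is not the course's official solution, and the declaration `void timestable(int table, int maxcount)` is an assumed signature chosen here to match the calls `timestable(5,10)` and `timestable(12,10)` in `main.cpp`.

``````// timestable.h
#ifndef _TIMESTABLE_H
#define _TIMESTABLE_H

// Declaration: print the 'table' times table from 1 up to 'maxcount'
void timestable(int table, int maxcount);

#endif``````

``````// timestable.cpp
#include <iostream>

#include "timestable.h"

// Definition: loop from 1 to maxcount, printing one line of the table per iteration
void timestable(int table, int maxcount)
{
for (int i=1; i<=maxcount; ++i)
{
std::cout << i << " times " << table << " equals " << (i*table) << std::endl;
}
}``````

Compiling with `g++ main.cpp timestable.cpp -o timestable` and running `./timestable` should then print the five and twelve times tables up to 10.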
https://app.oncoursesystems.com/school/webpage/11518121/1218309
[ "page contents\n\nChapter 1\n\nRepresent, count, and write numbers 0-5.\n\nChapter 2\n\nCompare Numbers to 5.\n\nsame, greater than, less than, matching sets.\n\nChapter 3\n\nRepresent, Count, and Write Numbers 6 to 9.\n\nChapter 4\n\nRepresent and Compare Numbers to 10.\n\nChapter 5\n\nChapter 6\n\nSubtraction.\n\nChapter 7\n\nRepresent, Count, and Write Numbers 11-19.\n\nChapter 8\n\nRepresent, Count, and Write 20 and Beyond.\n\nChapter 9\n\nIdentify and Describe Two-Dimensional Shapes.\n\ncircle, square, triangle, rectangle, hexagon.\n\nChapter 10\n\nIdentify and Describe Three-Dimensional Shapes.\n\nsphere, cube, cylinder, cone.\n\nChapter 11\n\nMeasurement.\n\nlength, height, weight.\n\nChapter 12\n\nClassify and Sort Data.\n\nby color, by shape, by size." ]
http://www.marco-learningsystems.com/pages/sawyer/unbelieve.html
[ "More advanced Mathematics applied to give fundamental Insight.\n\nThe Importance of the Unbelievable\n\nWarwick Sawyer\n\nIn a lyrical moment Gottfried Leibniz remarked that 'the Divine Spirit found a sublime outlet in that wonder of analysis, that portent of the ideal world, the amphibian between being and not being, which we call the imaginary root of negative unity.' But are 'imaginary' numbers really any more imaginary than 'real' numbers?\n\nThe enjoyment of a play or a film is said to depend upon the suspension of disbelief. In the development of mathematics also there are times when progress depends upon the acceptance of an idea that appears absurd or impossible.\n\nNegative numbers were at one time such an idea. The common sense view reveals itself in sayings such as 'I couldn't care less'. The amount of something cannot be less than nothing. For many years mathematicians went along with this view. They did not accept a negative number as a value for a symbol or as the solution of an equation. If asked to solve x + 1 = 0 they would say that no solution existed.\n\nIt would be immensely inconvenient if this view had prevailed. In graphical work, for instance, we would need a different coordinate system for each quadrant of the plane. The equation of a line or curve would change as it passed from one quadrant to another (Figure 1). If we were seeking a point that satisfied certain conditions, we would have to try each quadrant in turn until we found one that worked. We would not be able simply to call the point (x, y) and at the end let the signs tell us where it lay.\n\nImpossible Number?\n\nAfter negative numbers and the usual rules for calculating with them had been accepted, a new impossibility appeared.\n\nSquares cannot be negative. The equation x² + 1 = 0 has no solution. But, as the years passed, cracks began to appear in this wall of impossibility.\n\nFor example, in the 16th century Italian mathematicians found a formula for solving cubic equations, similar to the usual formula for quadratics, but more complicated. Sometimes this formula gave baffling results. For instance, it was easy to check that x = 4 is a solution of the cubic equation x³ = 15x + 4. But the formula for solving cubics gave\n\nx = ∛(2 + √(-121)) + ∛(2 - √(-121)) .......... (1)\n\nwhich did not seem to make any sense. In 1572 Raphael Bombelli - after some hesitation - maintained that the strange formula (1) did in fact yield x = 4. To see why, let us use modern notation, in which the symbol i is introduced to mean √(-1). I am going to assume that you have met the idea of i before, and know how to calculate with complex numbers and how to represent them as points on the plane; but I want you to put yourself in the position of an ancient mathematician who has never encountered such an outlandish idea before.\n\nThe symbol i cannot denote a number in the usual sense, but we can ignore this problem and explore the implications, just as even earlier mathematicians accepted that the symbol -1 did not denote a number in what was the usual sense at the time. What they did for the equation x + 1 = 0, we will do for x² + 1 = 0. 
We will assume that the symbol i can be manipulated according to the standard laws of algebra, together with the rule i² = -1.\n\nThen (11i)² = 11²i² = 121(-1) = -121, so √(-121) = 11i.\n\nThus Bombelli's strange formula (1) becomes\n\nx = ∛(2 + 11i) + ∛(2 - 11i) .......... (2)\n\nA little experimentation leads to the following calculation:\n\n(2 + i)³ = 2³ + 3·2²·i + 3·2·i² + i³\n= 8 + 12i - 6 - i\n= 2 + 11i\n\nSo it is sensible to maintain that\n\n∛(2 + 11i) = 2 + i .......... (3)\n\nSimilarly\n\n∛(2 - 11i) = 2 - i .......... (4)\n\nSubstituting (3) and (4) into (2) we get\n\nx = (2 + i) + (2 - i) = 4\n\nIn other words, provided we accept that there is some kind of 'number' i which satisfies i² = -1, then we can extract the correct answer x = 4 from the formula (1).\n\nAn idea that gives wrong answers is clearly nonsense; but an idea that looks silly but gives correct answers might possibly make some kind of sense if it was looked at in the right way. For more than two centuries after this, faith in the impossible number i = √(-1) gradually grew. Although no one could defend it logically, it kept giving correct results.\n\nThe most dramatic advance was made by Roger Cotes in Cambridge in around 1715. He started with the three infinite series for the exponential, sine, and cosine functions:\n\ne^x = 1 + x + x²/2! + x³/3! + x⁴/4! + ...\n\nsin x = x - x³/3! + x⁵/5! - ...\n\ncos x = 1 - x²/2! + x⁴/4! - ...\n\nThe symbols in the series for e^x can be cut out and placed over the same symbols in the other two series. However, a few minus signs have to be thrown in. More formally, we can express this as the equation\n\ne^(ix) = cos x + i sin x .......... (5)\n\nThe ix makes the signs work out properly. What makes this so striking is that it brings together two parts of mathematics that seem totally separate. We meet sines and cosines in the framework of geometry, involving the lengths and angles of right-angled triangles. The background of e^x is entirely different. Expressions involving indices occur in algebra, and the number e is usually first met in calculus. (See MATHEMATICS REVIEW Vol 1 no. 5 for an article about e.) There is no hint that the two topics are ever likely to merge.\n\nHowever, when they do merge it is very welcome. The algebra of powers of e is relatively easy - it depends only on the laws of indices\n\ne^m e^n = e^(m+n) .......... (6)\n\n(e^m)^n = e^(mn) .......... (7)\n\nThe formulas for sin(A + B) or cos (A + B) are much more complicated. However, if we accept the existence of that elusive 'number' i, then we can use (5) to derive trigonometric formulas in a very simple way.\n\nFor example, the law of indices (6) tells us that\n\ne^(iA) e^(iB) = e^(i(A+B)) .......... (8)\n\nUsing (5) on both sides,\n\n(cos A + i sin A)(cos B + i sin B) = cos(A + B) + i sin(A + B) .......... (9)\n\nMultiplying out the left-hand side and equating real and imaginary parts gives\n\ncos(A + B) = cos A cos B - sin A sin B .......... (10)\n\nsin(A + B) = sin A cos B + cos A sin B .......... (11)\n\nThese are two very basic and very useful results which you will have met before. But you probably have not seen them derived using just algebra before. Again this illustrates how useful i is, and that it seems to keep giving correct answers.\n\nFormulas (10) and (11) are similar enough to be confusing. I have found that the best way to memorise a result is to derive it every time I require it. At first this is a slow process, but after I have performed it several times a channel seems to become carved in the brain. Part way through the work, the conclusion flashes into the mind. With repetition this tends to happen more and more quickly.\n\nSome Applications\n\nWhat we have seen so far certainly suggests that introducing a new kind of number (i) can be useful for mathematics. But does it have any use in the real world? 
The same problem arose when mathematicians first invented -1 - its applications turned out to include temperatures (when a negative value indicates a temperature below freezing), banking (when a negative value indicates an amount owing), and, as mentioned at the beginning, graphical work (when different signs yield different quadrants in the plane).\n\nOnce we have the 'number' i, we must also have numbers like 2i, 3 - 2i, and so on. Indeed if x and y are any 'real' numbers (including fractions and decimals but not √(-1)) then we must consider x + iy to be a 'number' on the same footing as i itself. We call x + iy a complex number and denote it by z:\n\nz = x + iy\n\nWe call x the real part of z and y the imaginary part of z. The word 'complex' here does not mean that z is complicated; it just means that z is composed of several parts (namely x and y). Similarly 'real' and 'imaginary' do not have their usual meanings; they just indicate that 'real' numbers are ordinary numbers whereas 'imaginary' ones are those that involve the new type of number i. It is not such a bad name - 'imaginary' numbers are a product of the mathematician's imagination. (In the modern view, so are 'real' numbers!) You may remember that, just as we can think of the real number x as lying on a line, we can represent the complex number x + iy by the point (x,y) in a plane. This is often called the Argand diagram, after the French mathematician, Jean-Robert Argand, who described it in 1806. It was also invented by a Dane called Caspar Wessel, nine years earlier, and by the German, Carl Friedrich Gauss in 1811. Indeed, the basic idea is present in the work of the Englishman John Wallis in 1673. To avoid an international incident we will simply call it the complex plane.\n\nRemember: you do algebra with complex numbers in exactly the same way as with real numbers, but remembering that i² = -1. For example\n\nz² = (x + iy)² = x² + 2ixy + i²y² = (x² - y²) + i(2xy)\n\nNotice that we started with a function of z, namely f(z) = z², and we emerged with two functions of x and y, namely the real and imaginary parts x² - y² and 2xy.\n\nWe can use these two functions to define two families of curves:\n\nx² - y² = c\n\n2xy = k\n\nwhere c and k are constants. If you sketch these, for various values of c and k, you will get two families of hyperbolas (Figure 2). Moreover, all the curves in one family cross all those in the other at right angles.\n\nAmazingly, the same is true if we use any other function of z. For example, with\n\nf(z) = z³ = (x + iy)³ = (x³ - 3xy²) + i(3x²y - y³)\n\nwe get the families x³ - 3xy² = c and 3x²y - y³ = k. With some expenditure of effort you can sketch these families and check the right angle property. A much simpler case is when f(z) = z, and the curves are x = c and y = k. These are just two families of straight lines parallel to the two axes: obviously these cut at right angles (Figure 3).\n\nThis diagram can be interpreted physically in a variety of ways.\n\n(1) The dotted lines could represent contours of a plane, tilted at an angle to the horizontal. Liquid flows down the plane at a velocity proportional to the gradient. It moves along paths indicated by the unbroken lines.\n\n(2) A plane is a possible shape for a membrane or soap film stretched on a suitable frame (say a square). 
The dotted lines could represent contours specifying this shape.\n\n(3) The unbroken lines could represent electric current flowing through a copper sheet, and the dotted lines the corresponding equipotentials.\n\n(4) Equally, it might show lines of force and equipotentials in a uniform magnetic field.\n\nSimilar physical interpretations are valid for the families associated with any complex function, not just f(z) = z; so there are applications to fluids, soap films, electricity, and magnetism. Nothing 'imaginary' about those!\n\nBecause they are described by the same mathematical idea, these situations have so much in common that we can sometimes draw on one of them to illuminate another. For instance, the effect of height in relation to gravity can provide an analogy useful for thinking about electricity or magnetism.\n\nNotice that the physical interpretation of a complex function such as z² is in terms of two physical properties. That makes sense, because every 'complex' number is assembled from two components: its real and imaginary parts. Naturally any interpretation in the real world will involve two separate quantities. One advantage of the complex notation is that it enables us to consider both quantities simultaneously.\n\nLet us try something a little more ambitious - the function f(z) = 1/z. We have to find the real and imaginary parts of 1/(x + iy). There is a nice trick for achieving this, which is to observe that (x + iy)(x - iy) = x² + y², which is real. So we multiply numerator and denominator in 1/(x + iy) by x - iy, and deduce that\n\n1/z = (x - iy)/(x² + y²)\n\nso the real part is x/(x² + y²) and the imaginary part is -y/(x² + y²). The two families of curves defined by these functions are families of circles through the origin (Figure 4).\n\nDown the Plughole\n\nWe can get an approximate realisation of this pattern as a flow of water if we think of the kitchen sink; the black circle is the outlet and the white circle a jet of water directed near it. Alternatively, we can imagine a copper sheet with electric current entering at one point and leaving at another point very close to it.\n\nThe diagram also suggests the lines of force of a magnet. Actually, magnetic lines of force are not circles in the real world; but they would be in a two-dimensional world, and this is a very fruitful idea. Figure 4 therefore describes a space that is empty except for a magnet at the origin.\n\nThe function 1/z from which we started behaves in a special way at the origin - it becomes infinite. So we conclude that an infinity (or singularity) in a function corresponds to a point at which an active element, such as a magnet, is placed. Now the active elements determine what happens everywhere. So a complex function should in some sense be completely determined by its singularities.\n\nWe can take the idea that the singularities of a complex function are important and apply it to series. We know that some degree of care is needed when handling infinite series. For example, summing a geometric progression leads to the equation\n\n1/(1 - x) = 1 + x + x² + x³ + ...\n\nThis is perfectly reliable if x lies between -1 and 1; but if we take, say, x = 3, then we get the absurd result\n\n-1/2 = 1 + 3 + 9 + 27 + ...\n\nIt is not very surprising that things go wrong if x goes past the value 1, because 1/(1 - x) becomes infinite when x = 1. However, we get no such warning in the case of the equation\n\n1/(1 + x²) = 1 - x² + x⁴ - x⁶ + ...\n\nwhich has the gentle and innocuous graph shown in Figure 5. The value of the function never goes below 0 or above 1, and the denominator never vanishes. 
Despite this, we still hit trouble if we try x = 3: it gives the result\n\n1/10 = 1 - 9 + 81 - 729 + ...\n\nAlthough there are methods for testing when a series of real numbers makes good sense (that is, it is convergent), they involve quite a lot of calculation. In contrast, if we look at this question from the point of view of complex numbers, there is a very simple test. The corresponding complex function\n\n1/(1 + z²)\n\ndoes have values of z for which the denominator vanishes (and the value becomes infinite), namely z = i and z = - i. If we draw these points in the complex plane, using (x,y) to represent x+iy as usual, we get Figure 6. The points z = ± i, at which infinite values of the function occur, are its singularities.\n\nIn the world of complex numbers there is a very simple test for convergence of a power series. Draw any circle, with its centre as the origin, that does not contain any singularity of the function. Then the series converges for all z inside that circle. If z is further from the origin than some singularity, then the series definitely does not converge for that value of z.\n\nFor the series 1 - z² + z⁴ - ... representing 1/(1 + z²) we can take any circle of radius less than 1, but we expect problems of non-convergence outside the circle of radius 1. In particular, we expect trouble when z = 3. (Note that when z is real, then z = x + i0 = x; its real part. So z = 3 is the point x = 3, y = 0.)\n\nThis is much more satisfying than some intricate calculation. It says that the series behaves badly because the function it represents behaves badly. Indeed, much more is true. In general, infinite series have to be handled very carefully. There are many things that look quite reasonable - changing the order of the terms, or differentiating or integrating the series term by term - that can sometimes lead to incorrect conclusions. There is a blanket theorem, which I have never seen stated in quite this way in any textbook:\n\nWithin a circle centred at the origin, not containing any singularity, you can safely carry out any operation on the power series that might occur to a sane mathematics student.\n\nWarwick Sawyer retired from the University of Toronto in 1976. Previously he taught mathematics at university level in Britain, New Zealand and the USA. From 1948 to 1950 he was the first Head of Mathematics at the University of Ghana. He has written twelve books, including Mathematician's Delight and Prelude to Mathematics. Their aim is to enable scientists, engineers and the general public to use any mathematics they might need, with understanding and without anxiety.\n\nThis article first appeared in the November 1991 issue of Mathematical Review." ]
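The convergence test described in the article can be restated in the usual textbook form. This is the standard radius-of-convergence statement from complex analysis, added here for reference rather than taken from the article itself:

$$f(z) = \sum_{n=0}^{\infty} a_n z^n \quad \text{converges for } |z| < R \text{ and diverges for } |z| > R,$$

where $R$ is the distance from the origin to the nearest singularity of $f$. For $f(z) = 1/(1+z^2)$ the nearest singularities are $z = \pm i$, so $R = 1$, which is why the real series for $1/(1+x^2)$ fails at $x = 3$ even though nothing looks wrong on the real line.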
https://studypooldev.com/discuss/8162179/complete-the-multiple-choice-questions-correctly
[ "", null, "# complete the multiple choice questions correctly.", null, "Anonymous\ntimer Asked: Jul 28th, 2018\naccount_balance_wallet \\$35\n\n### Question Description\n\n• What are some of the unique advantages to using a case-crossover design in environmental epidemiology?\n• Which study design is not feasible with rare exposures?\n• What study design may be appropriate when it is not possible to estimate an effect on the individual level?\n• Which of the following best defines external validity?\n• Ensuring a valid study is most often determined at what stage?\n• The design stage\n• The analysis stage\n• The interpretation stage\n• The generalization stage\n• Analytic statistics are used to assess the distribution of data.\n• Which of the following best describes the null hypothesis?\n• Which of the following best describes statistical inference?\n• Reliability refers to the consistency of the result.\n• Incidence is a better measure of disease risk than prevalence?\n• Prevalence is influenced by which of the following?\n• What type of measure is appropriate for assessing the association between two dichotomous variables in a cohort study?\n• Which of the following statistics indicates the percentage of the disease in the population that can be attributed to the exposure?\n• Which of the following statistics reflects the excess risk of disease among the exposed group attributed to the exposure?\n• What type of measure is appropriate for assessing the association between two dichotomous variables in a case-control study?\n• Which of the following influence the width of the confidence interval for a relative risk or odds ratio?\n• Which type of regression model is most appropriate to use when the outcome variable is categorical?\n• Automatically controls for confounding from time-related factors\n• Yields incidence and prevalence data\n• Effective at studying the effects of short-term exposures on the risk of acute events.\n• Requires less time, money, and size\n• Cross sectional\n• Case control\n• Cohort\n• Two of the above\n• Case series\n• Cohort\n• Ecologic\n• Experimental\n• That component of accuracy reflecting the level of systematic error in the study.\n• The extent the results of a study are relevant to people who are not part of the study (representativeness).\n• The extent the results of a study are not attributable to bias or confounding.\n• Two of the above.\n• True\n• False\n• A best guess formulated using a statistic\n• What is currently believed, the status quo\n• An informal basis for a statistical test of association.\n• Drawing conclusions about the population based on a parameter\n• The use of descriptive statistics to draw conclusions about associations\n• The investigator decides between two hypotheses regarding a population, based on sampled data and probability to indicate the level of reliability in the conclusion.\n• True\n• False\n• True\n• False\n• Incidence\n• Mortality\n• Cure\n• Two of the above\n• All of the above\n• Relative risk\n• Risk ratio\n• Rate ratio\n• All of the above\n• Risk ratio\n• Attributable risk\n• Attributable risk percent\n• Population attributable risk\n• Population attributable risk percent\n• Risk ratio\n• Attributable risk\n• Attributable risk percent\n• Population attributable risk\n• Population attributable risk percent\n• Relative risk\n• Risk ratio\n• Rate ratio\n• Odds ratio\n• Level of significance\n• P value\n• Sample size\n• Two of the above\n• All of the above\n• Linear regression\n• Cox proportional hazards\n• Poisson 
regression\n• Logistic regression\n\n33. Which of the following best defines causal inference?\n\n• A conclusion about the presence of a disease and reasons for its existence.\n• A conclusion about a population based on sampled data.\n• A conclusion about relationships supported by probability.\n• An indication of the level of reliability in the conclusion.\n• True\n• False\n• Consistency\n• Temporality\n• Biological plausibility\n• An experimental study design\n• True\n• False\n• A review of a real health events grouped together in time and location\n• A review of an unusual number of perceived health events grouped together in time and location\n• A review of an unusual number, real or perceived, health events grouped together in time and location\n• Confirm reported disease cases.\n• Determine if there is a higher than expected level of the disease.\n• Identify causal relationships.\n• Two of the above are true.\n• Three of the above are true.\n• True\n• False\n• Point source\n• Continuous source\n• Propagated source\n• Person, place, and time\n• Ecologic, individual, and time\n• Age, period, and cohort\n• Age and time\n• Age effects\n• Cohort effects\n• Period effects\n• Two of the above\n• All of the above\n• Person\n• Place\n• Time\n• Latency period\n• True\n• False\n• True\n• False\n• Cohort\n• Case-control\n• Case-crossover\n• Cross-sectional\n• The definition of environmental epidemiology includes the study of all of the following except:\n• Which of the following best defines the personal versus ambient environment?\n• Exposure information from a diary is an example of an indirect measure of exposure.\n• Accurate assessment of outcome assumes which of the following?\n• Which of the following best defines risk management?\n• What word best represents the following: Anatomic, physiologic, biochemical, or molecular substances that are associated with the presence and severity of specific disease states and are detectable and measurable by a variety of methods including physical examination, laboratory assays and medical imaging.\n• Biomarkers are traditionally used to measure which of the following?\n• What is useful to avoid recall bias?\n• Which of the following best defines study design?\n• What study design is best for answering questions about exposure-disease relationships when the latency period is long?\n• Cross sectional\n• Ecologic\n• Case-control\n• Cohort\n• Experimental\n• Which of the following best defines case-crossover?\n• Which of the following is not an observational study?\n• Experimental study\n• Case-control study\n• Cohort study\n• Ecologic study\n• What are some ways to improve the internal validity of a study?\n• Confounding is a threat to which of the following?\n• Analytic statistics are used to measure and test hypothesized associations.\n• What type of measure is appropriate for assessing the association between two continuous variables in a cross-sectional study?\n• Relative risk\n• Regression slop coefficient\n• Correlation coefficient\n• Two of the above\n• Three of the above\n• Which of the following statistics is used to indicate the percentage of disease cases attributed to their exposure?\n• An important aim of environmental epidemiology is which of the following?\n• What is the best study design for establishing a time sequence of events?\n• It is not necessary to have a complete understanding of the causal factors and mechanisms to develop effective prevention and control measures.\n• An advantage of maps is that they are not influenced by 
confounding factors that cluster spatially.\n• Which of the following best defines age-adjusted rates?\n• Which of the following study designs is best for establishing a cause-effect relationship?\n• Case study\n• Case-control study\n• Ecologic\n• Cohort\n• Which of the following study designs is best when you want to study the relationship between a single exposure and several possible disease outcomes?\n• Distribution\n• Determinants\n• Application to prevent and control disease\n• Frequency and pattern\n• All of the above are captured in the definition\n• Routes of human exposure to contaminants through solid, liquid, and gaseous environments.\n• The inner body is protected from outside contaminants by three barriers: the skin, the gastrointestinal tract, and the lungs.\n• The avenue or mechanism by which it affects people.\n• An environment where a person has control (e.g., diet) with the ambient environment where they have little or no control (e.g., pollution).\n• True\n• False\n• A standard case definition\n• Adequate reporting\n• Both of the above\n• Neither of the above\n• A tool to integrate exposure and health effects in order to identify the potential health hazards in humans.\n• An array of techniques to measure or estimate whether the exposure poses a threat to health or the ecosystem.\n• Integration of recognized risk, risk assessment, development of strategies to manage risk, and mitigation of risk through managerial resources.\n• Surveillance\n• Monitoring\n• Biomarker\n• Determinant\n• Absorption of an agent\n• Bioaccumulation of metabolite within the body\n• Hazard rate\n• Two of the above are true\n• All of the above are true\n• Meta-analysis\n• Diary records\n• Disease registries\n• A program that directs the researcher along the path of systematically collecting, analyzing, and interpreting results.\n• An approach used in answering questions.\n• An approach that may involve experimental assessment.\n• An approach that involves collection and description of a public health problem.\n• Presence of risk factor(s) for people with a condition is compared with that for people who do not.\n• Exposure frequency during a window immediately prior to an outcome event is compared with exposure frequencies during a control time or times at an earlier period.\n• People are followed over time to describe the incidence or the natural history of a condition. Assessment can also be made of risk factors for various conditions.\n• Examine the relation between the intervention and outcome variables in a cohort of people followed over time.\n• Matching\n• Restriction\n• Multiple regression\n• Two of the above\n• Three of the above\n• Internal validity\n• External validity\n• True\n• False\n• Risk ratio\n• Attributable risk\n• Attributable risk percent\n• Population attributable risk\n• To identify environmentally caused public health problems.\n• To identify reasons for public health problems.\n• To develop effective prevention and control efforts.\n• All of the above\n• Case-control\n• Cross-sectional\n• Cohort\n• Ecologic\n• True\n• False\n• True\n• False\n• Weighted average of the age-specific rates.\n• Proportional age averages of crude rates.\n• Crude rates weighted by age and season.\n• Age standardized populations.\n• Case report\n• Case series\n• Case-control\n• Cohort\n\n71.An observational study may involve which of the following?\n\na. Case-control\n\nb. Cohort\n\nc. Cross-sectional\n\nd. All of the above\n\n72. 
What is the best study design to assess an accumulative dose?\n\n73. Assessment of reported clusters should include all of the following except?\n\n• Evaluation to determine whether an excess of the health problem has occurred\n• Case evaluation to assure that a biological basis is present\n• Evaluation of some or all of the suspected cases to describe the epidemiologic characteristics. These may be performed in order or concurrently.\n• Identifying information on those supposedly affected\n• All of the above\n\n74. A time-series analysis may involve which of the following?\n\n• Ecologic data\n• Longitudinal data\n• Both of the above\n• Neither of the above\n\nII. Calculation Questions (30 points) (use as much space as needed)\n\n• Indicate an appropriate type of epidemiological study design.\n• Enter the data into a 2x2 table\n• Calculate a risk of developing hepatitis A due to having eaten at restaurant X\n• Interpret the data\n• Enter the data into a 2x2 table\n• Calculate a risk of developing Salmonella infection from eating tomatoes\n• Interpret the data\n\n(The standard 2x2 measures are summarised below for reference.)" ]
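For the calculation questions above, the usual 2x2 layout and formulas are as follows. This is general epidemiology reference material, not part of the original assignment: with exposed cases $a$, exposed non-cases $b$, unexposed cases $c$ and unexposed non-cases $d$,

$$\text{Risk ratio (cohort design)} = \frac{a/(a+b)}{c/(c+d)}, \qquad \text{Odds ratio (case-control design)} = \frac{ad}{bc}.$$

A ratio above 1 suggests the exposure (eating at restaurant X, or eating tomatoes) is associated with a higher risk of the outcome.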
http://specialfunctionswiki.org/index.php/Antiderivative_of_sine_integral
[ "# Antiderivative of sine integral\n\nThe following formula holds: $$\\displaystyle\\int \\mathrm{Si}(z) \\mathrm{d}z = \\cos(z) + z \\mathrm{Si}(z) + C,$$ where $\\mathrm{Si}$ denotes the sine integral and $\\cos$ denotes cosine." ]
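One way to verify this formula (a standard integration-by-parts argument; the derivation is not given on the original page) uses $\mathrm{Si}'(z) = \sin(z)/z$: $$\int \mathrm{Si}(z) \, \mathrm{d}z = z\,\mathrm{Si}(z) - \int z \cdot \frac{\sin(z)}{z} \, \mathrm{d}z = z\,\mathrm{Si}(z) - \int \sin(z) \, \mathrm{d}z = z\,\mathrm{Si}(z) + \cos(z) + C.$$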
https://meta.stackoverflow.com/questions/346351/what-is-the-average-number-of-accepted-answers-number-of-answers-ratio
[ "# What is the average number of accepted answers / number of answers ratio?\n\nI am very curious about the dynamics of answering questions. I often learn a lot from other answers to the question I answered, even when mine was accepted. So I understand unaccepted answers as helpful too. But I was wondering: what is the average per user number of accepted answers / number of answers ratio? Even if we do not consider the accepted answer as the only one helpful to all viewers of a post, I think such a ratio would be quite interesting.\n\nI meant to consider answers on questions with multiple answers and an accepted answer. If I had 30 answers on questions with an accepted answer and multiple answers and 15 of those were the accepted answers, my ratio would be 0.5\n\n• given that a user can only accept at most one answer, what would this number mean? If all questions had an accepted answer the ratio would still not be 1:1 – Robert Longson Apr 2 '17 at 7:58\n• An example: I answered 30 questions. 15 of my answers were accepted. Ratio: 0.5 – Julian Wittische Apr 2 '17 at 8:05\n• Some askers never accept anything so if your ratio is 0 it could be either down to a) you answered questions where the user can't be bothered or doesn't know how to accept answers or b) your answer is not as helpful as some other answer. The latter might be useful to know so you can improve, the former not so much. – Robert Longson Apr 2 '17 at 8:09\n• Maybe what you really want to know is your ratio when applied to answers to questions with more than one answer, one of which is accepted. – Robert Longson Apr 2 '17 at 8:11\n• This used to be a metric, but was ultimately removed due to users conditionally offering help to another user based on it. TL;DR: Yes, this info can be gleaned via SEDE (as in another answer points out), but it's not entirely useful for you as a mere mortal. – Makoto Apr 2 '17 at 21:17\n• @Makoto it's not the same value as accept rate. In fact it's pretty different to that as both the numerator and denominator only count questions that have an accepted answer. – Robert Longson Apr 2 '17 at 21:21\n• @RobertLongson Other possibilities that confuse matters even further include (c) the asker prematurely accepts a FGITW answer less helpful than yours; and (d) your answer and some other answer mutually complement each other, leading the asker to toss a coin for deciding who gets the check mark. – duplode Apr 4 '17 at 15:02\n• @duplode Those issues apply to everyone though so if your percentage is significantly lower than average then perhaps you've something to work on. It seems that experienced answerers are around the 60-65% mark (e.g. Jon Skeet is at 65%). – Robert Longson Apr 4 '17 at 15:06\n• @RobertLongson I would also imagine the accept rate would be lower(on average) in very popular tags and higher in niche tags with fewer experienced users providing answers. 
– Booga Roo Apr 5 '17 at 0:18\n\n## 1 Answer\n\nYou can use SEDE to query the database:\n\n``````if (select count(*) as total\nfrom posts q\ninner join posts a on a.parentid = q.id\nwhere q.posttypeid = 1\nand a.posttypeid = 2\nand q.acceptedanswerid is not null\nand q.answercount > 1\nand a.owneruserid = ##UserId##) > 0\nselect cast(cast(mine as float) / total * 100 as varchar) + '%' from\n(select count(*) as mine\nfrom posts q\ninner join posts a on a.parentid = q.id\nwhere q.posttypeid = 1\nand a.posttypeid = 2\nand q.acceptedanswerid is not null\nand q.answercount > 1\nand a.owneruserid = ##UserId##\nand q.acceptedanswerid = a.id) mine,\n(select count(*) as total\nfrom posts q\ninner join posts a on a.parentid = q.id\nwhere q.posttypeid = 1\nand a.posttypeid = 2\nand q.acceptedanswerid is not null\nand q.answercount > 1\nand a.owneruserid = ##UserId##) total\nelse select 'User has not answered any questions where there are multiple answers, one of which is accepted'\n``````\n\nHere's a link to the live query.\n\nYour score is 36% on stackoverflow FWIW.\n\n• I am going to try to write a query to create a histogram with the score of many users to be able to figure out where one stands and what the distribution looks like. – Julian Wittische Apr 3 '17 at 20:36\n• Should I worry about the divide by zero error I get when I run this? – Jonathan Leffler Apr 4 '17 at 6:20\n• @JonathanLeffler Your id gives me 46%, so presumably you're using a different one. I guess whatever user id you're using has simply not answered any questions where there are 2 or more answers one of which is accepted. In that case they have no ratio really. – Robert Longson Apr 4 '17 at 6:47\n• OK; I suffer from digital dyslexia at this time of night — I mistyped my user number and the code is not resilient to non-existent users or users with insufficient data to produce an answer. The user ID that I mistyped hasn't been seen since Feb 2009 and has 3 points rep in total. – Jonathan Leffler Apr 4 '17 at 6:55\n• @JonathanLeffler I've updated the answer though it does make it somewhat lengthier. – Robert Longson Apr 4 '17 at 7:04\n• New users; new bugs! Isn't that how it usually works? Apologies for managing to find the problem. It really wasn't my intention! – Jonathan Leffler Apr 4 '17 at 7:07\n• Either there is a Bug here or I got really 146% of my answers accepted ... not bad :D. – Tom Apr 4 '17 at 14:18\n• @tom I get 62% with your ID – Robert Longson Apr 4 '17 at 14:21\n• Yeah, it's strange, there are two different versions of your query, but they look the same: version 1 with 146% and version 2 with 62%. – Tom Apr 4 '17 at 14:28\n• @tom Someone else edited and broke what you're calling version 1. It's not my query any more. – Robert Longson Apr 4 '17 at 14:31\n• But what's the difference in both version? Looks quite the same to me. – Tom Apr 4 '17 at 14:34\n• @Tom q.answercount > 1 vs q.answercount = 1 third line from the bottom. – Robert Longson Apr 4 '17 at 14:35\n• Oh, good catch :D. – Tom Apr 4 '17 at 14:39\n• Probably the only metric where I (76%) rank higher than Jon Skeet (66%), must be a bug. – SleuthEye Apr 5 '17 at 0:16" ]
https://fintechprofessor.com/2017/12/10/fama-and-macbeth-1973-fastest-regression-in-stata/comment-page-1/
The Fama-MacBeth (1973) regression is a two-step procedure. In the first step, a cross-sectional regression is estimated for each of the T time periods; in the second step, the coefficients from these T cross-sectional regressions are averaged over time. The standard errors are adjusted for cross-sectional dependence. This is generally an acceptable solution when there is a large number of cross-sectional units and a relatively small time series for each cross-sectional unit. However, if both cross-sectional and time-series dependencies are suspected in the data set, then Newey-West consistent standard errors can be an acceptable solution.\n\n## Estimation Procedure\n\nThe Fama-MacBeth (FMB) regression can be easily estimated in Stata using the asreg package. Consider the following three steps for estimation of FMB regression in Stata.\n\n1.  Arrange the data as panel data and use the xtset command to tell Stata about it.\n\n2.  Install asreg from ssc with this line of code:\n\n`ssc install asreg`\n\n3. Apply the asreg command with the fmb option\n\n## An Example\n\nWe shall use the grunfeld dataset in our example. Let’s download it first:\n\n`webuse grunfeld`\n\nThis data is already xtset, with the following command:\n\n`xtset company year`\n\nAssume that we want to estimate an FMB regression where the dependent variable is invest and the independent variables are mvalue and kstock. Just like the regress command, asreg uses the first variable as the dependent variable and the rest of the variables as independent variables. Using the grunfeld data, the asreg command for FMB regression is given below:\n\n`asreg invest mvalue kstock, fmb`\n``` Fama-MacBeth (1973) Two-Step procedure Number of obs = 200 Num. time periods = 20\nF( 2, 19) = 195.04\nProb > F = 0.0000\navg. R-squared = 0.8369\n------------------------------------------------------------------------------\n| Fama-MacBeth\ninvest | Coef. Std. Err. t P>|t| [95% Conf. Interval]\n-------------+----------------------------------------------------------------\nmvalue | .1306047 .0093422 13.98 0.000 .1110512 .1501581\nkstock | .0729575 .0277398 2.63 0.016 .0148975 .1310176\n_cons | -14.75697 7.287669 -2.02 0.057 -30.01024 .496295\n------------------------------------------------------------------------------```\n\n## Newey-West standard errors\n\nIf Newey-West standard errors are required for the second stage regression, we can use the option newey(integer).  The integer value specifies the number of lags for estimation of Newey-West consistent standard errors. Please note that without using the option newey, asreg estimates normal standard errors of OLS. This option accepts only integers, for example newey(1) or newey(4) are acceptable, but newey(1.5) or newey(2.3) are not. So if we were to use two lags with the Newey-West errors for the above command, we shall type:\n\n```asreg invest mvalue kstock, fmb newey(2)\nFama-MacBeth Two-Step procedure (Newey SE) Number of obs = 200\n(Newey-West adj. Std. Err. using lags(2)) Num. time periods = 20\nF( 2, 19) = 39.73\nProb > F = 0.0000\navg. R-squared = 0.8369\n---------------------------------------------------------------------------------\n| Newey-FMB\ninvest | Coef. Std. Err. t P>|t| [95% Conf. 
```asreg invest mvalue kstock, fmb newey(2)\nFama-MacBeth Two-Step procedure (Newey SE) Number of obs = 200\n(Newey-West adj. Std. Err. using lags(2)) Num. time periods = 20\nF( 2, 19) = 39.73\nProb > F = 0.0000\navg. R-squared = 0.8369\n---------------------------------------------------------------------------------\n| Newey-FMB\ninvest | Coef. Std. Err. t P>|t| [95% Conf. Interval]\n-------------+-------------------------------------------------------------------\nmvalue | .1306047 .0150138 8.70 0.000 .0991804 .1620289\nkstock | .0729575 .0375046 1.95 0.067 -.0055406 .1514557\n_cons | -14.75697 8.394982 -1.76 0.095 -32.32787 2.813928\n---------------------------------------------------------------------------------\n\n```\n\nIf we wish to display the first-stage N cross-sectional regressions of the FMB procedure, we can use the option first. And if we wish to save the first-stage results to a file, we can use the option save(filename). Therefore, commands for these options will look like:\n\n```asreg invest mvalue kstock, fmb newey(2) first\n\nasreg invest mvalue kstock, fmb newey(2) first save(FirstStage)\n```\nFirst-stage Fama-MacBeth regression results\n_TimeVar _obs _R2 _b_mva~e _b_kstock _Cons\n1935 10 .865262 .1024979 -.0019948 .3560334\n1936 10 .6963937 .0837074 -.0536413 15.21895\n1937 10 .6637627 .0765138 .2177224 -3.386471\n1938 10 .7055773 .0680178 .2691146 -17.5819\n1939 10 .8266015 .0655219 .1986646 -21.15423\n1940 10 .8392551 .095399 .2022906 -27.04707\n1941 10 .8562148 .1147638 .177465 -16.51949\n1942 10 .857307 .1428251 .071024 -17.61828\n1943 10 .842064 .1186095 .1054119 -22.7638\n1944 10 .875515 .1181642 .0722072 -15.82815\n1945 10 .9067973 .1084709 .0502208 -10.51968\n1946 10 .8947517 .1379482 .0054134 -5.990657\n1947 10 .8912394 .163927 -.0037072 -3.732489\n1948 10 .7888235 .1786673 -.0425555 8.53881\n1949 10 .8632568 .1615962 -.0369651 5.178286\n1950 10 .8577138 .1762168 -.0220956 -12.17468\n1951 10 .873773 .1831405 -.1120569 26.13816\n1952 10 .8461224 .1989208 -.067495 7.29284\n1953 10 .8892606 .1826739 .0987533 -50.15255\n1954 10 .8984501 .1345116 .3313746 -133.3931\nFama-MacBeth Two-Step procedure (Newey SE)\ninvest Coef. St.Err. t-value p-value [95% Conf Interval] Sig\nmvalue 0.131 0.015 8.70 0 0.099 0.162 ***\nkstock 0.073 0.038 1.95 0.068 -0.006 0.152 *\ncons -14.757 8.395 -1.76 0.097 -32.469 2.955 *\nMean dependent var SD dependent var 216.875\nR-squared Number of obs 200.000\nF-test Prob > F 0.000\nNotes: *** p<.01, ** p<.05, * p<.1\n\n## Paid help for FMB regression\n\nThe Fama and MacBeth regression is extensively used in testing asset pricing models. As discussed in this post, the typical use is to divide the sample period into several periods, make portfolios, find their betas in the first period, make portfolios on those beta rankings, find returns in the subsequent periods, and regress those returns on the portfolio betas. These steps are usually computationally intensive and hard to understand. We offer paid help in such cases. We also provide help if a researcher has a unique research design and wants to apply the FMB regression. See this page for pricing options and other details.\n\n## More on FMB regression\n\nFMB regression – what, how and where\n\nFMB regressions with 25-portfolios – An example\n\nRolling window statistics with asrol" ]
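A minimal Python sketch of the two-step procedure described above may help readers who want to see the mechanics outside Stata. It is an illustration only, not the asreg implementation and not the blog author's code; the function name fama_macbeth and the data-frame variable grunfeld are placeholders introduced here, while the column names year, invest, mvalue and kstock mirror the grunfeld example above.

```python
# Sketch (not asreg): the Fama-MacBeth two-step procedure with pandas + statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fama_macbeth(df, time_col, y_col, x_cols):
    # Step 1: one cross-sectional OLS per time period.
    period_coefs = []
    for _, g in df.groupby(time_col):
        X = sm.add_constant(g[x_cols])
        period_coefs.append(sm.OLS(g[y_col], X).fit().params)
    coefs = pd.DataFrame(period_coefs)

    # Step 2: time-series averages of the per-period coefficients, with
    # plain standard errors of the mean (what asreg reports when the
    # newey() option is omitted).
    T = len(coefs)
    mean = coefs.mean()
    se = coefs.std(ddof=1) / np.sqrt(T)
    return pd.DataFrame({"coef": mean, "se": se, "t": mean / se})

# Hypothetical usage with a grunfeld-style panel held in a DataFrame:
# print(fama_macbeth(grunfeld, "year", "invest", ["mvalue", "kstock"]))
```

A Newey-West variant would change only step 2, replacing the plain standard error of the mean with a HAC estimate computed from the time series of coefficients.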
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88323444,"math_prob":0.7821725,"size":23362,"snap":"2022-40-2023-06","text_gpt3_token_len":5730,"char_repetition_ratio":0.14967035,"word_repetition_ratio":0.051381357,"special_character_ratio":0.28208202,"punctuation_ratio":0.12440087,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96556425,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T15:50:09Z\",\"WARC-Record-ID\":\"<urn:uuid:4a5dd4aa-5a59-4855-b700-75ff01358520>\",\"Content-Length\":\"219237\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:af5c168c-8bbf-47c1-8122-9fcb4e7a4945>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa84eef8-51b7-4425-a818-cbd9fc76d557>\",\"WARC-IP-Address\":\"213.59.121.50\",\"WARC-Target-URI\":\"https://fintechprofessor.com/2017/12/10/fama-and-macbeth-1973-fastest-regression-in-stata/comment-page-1/\",\"WARC-Payload-Digest\":\"sha1:T4A5Q4UJAWVNZKLQGS2ETRVIY72P5JLF\",\"WARC-Block-Digest\":\"sha1:2HSVDI5ZKEYT4CBG5BA6M6KFDZVRY3SV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500356.92_warc_CC-MAIN-20230206145603-20230206175603-00816.warc.gz\"}"}
https://dxr.mozilla.org/mozilla-central/source/testing/web-platform/tests/shadow-dom/Extensions-to-Event-Interface.html
[ "DXR is a code search and navigation tool aimed at making sense of large projects. It supports full-text and regex searches as well as structural queries.\n\n#### Mercurial (84bc52da6c3b)\n\nLine Code\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224\n``````<!DOCTYPE html>\n``````<html>\n``````<head>\n``````<title>Shadow DOM: Extensions to Event Interface</title>\n``````<meta name=\"author\" title=\"Ryosuke Niwa\" href=\"mailto:[email protected]\">\n``````<meta name=\"assert\" content=\"Event interface must have composedPath() as a method\">\n``````<link rel=\"help\" href=\"http://w3c.github.io/webcomponents/spec/shadow/#extensions-to-event-interface\">\n``````<script src=\"/resources/testharness.js\"></script>\n``````<script src=\"/resources/testharnessreport.js\"></script>\n``````<script src=\"resources/event-path-test-helpers.js\"></script>\n``````</head>\n``````<body>\n``````<div id=\"log\"></div>\n``````<script>\n``````\n``````test(function () {\n`````` var event = new Event('my-event');\n`````` assert_array_equals(event.composedPath(), []);\n``````}, 'composedPath() must return an empty array when the event has not been dispatched');\n``````\n``````test(function () {\n`````` var event = new Event('my-event');\n`````` document.body.dispatchEvent(event);\n`````` assert_array_equals(event.composedPath(), []);\n``````}, 'composedPath() must return an empty array when the event is no longer dispatched');\n``````\n``````test(function () {\n`````` var event = new Event('my-event');\n`````` assert_false(event.composed);\n``````}, 'composed on EventInit must default to false');\n``````\n``````test(function () {\n`````` var event = new Event('my-event', {composed: true});\n`````` assert_true(event.composed);\n``````\n`````` event = new Event('my-event', {composed: false});\n`````` assert_false(event.composed);\n``````}, 'composed on EventInit must set the composed flag');\n``````\n``````/*\n``````-SR: ShadowRoot -S: Slot target: (~) *: indicates start digit: event path order\n``````A (4) --------------------------- A-SR (3)\n``````+ B ------------ B-SR + A1 (2) --- A1-SR (1)\n`````` + C + B1 --- B1-SR + A2-S + A1a (*; 0)\n`````` + D --- D-SR + B1a + B1b --- B1b-SR\n`````` + D1 + B1c-S + B1b1\n`````` + B1b2\n``````*/\n``````\n``````function testComposedEvent(mode) {\n`````` test(function () {\n`````` var nodes = createFixedTestTree(mode);\n`````` var log = dispatchEventWithEventLog(nodes, nodes.A1a, new Event('my-event', {composed: true, bubbles: true}));\n``````\n`````` var expectedPath = ['A1a', 'A1-SR', 'A1', 'A-SR', 'A'];\n`````` assert_array_equals(log.eventPath, expectedPath);\n`````` assert_equals(log.eventPath.length, log.pathAtTargets.length);\n`````` assert_array_equals(log.pathAtTargets, expectedPath);\n`````` assert_array_equals(log.pathAtTargets, 
expectedPath);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : ['A1', 'A-SR', 'A'],\n`````` 'composedPath must only contain unclosed nodes of the current target.');\n`````` }, 'The event must propagate out of ' + mode + ' mode shadow boundaries when the composed flag is set');\n``````}\n``````\n``````testComposedEvent('open');\n``````testComposedEvent('closed');\n``````\n``````/*\n``````-SR: ShadowRoot -S: Slot target: (~) *: indicates start digit: event path order\n``````A ------------------------------- A-SR\n``````+ B ------------ B-SR + A1 --- A1-SR (1)\n`````` + C + B1 --- B1-SR + A2-S + A1a (*; 0)\n`````` + D --- D-SR + B1a + B1b --- B1b-SR\n`````` + D1 + B1c-S + B1b1\n`````` + B1b2\n``````*/\n``````\n``````function testNonComposedEvent(mode) {\n`````` test(function () {\n`````` var nodes = createFixedTestTree(mode);\n`````` var log = dispatchEventWithEventLog(nodes, nodes.A1a, new Event('my-event', {composed: false, bubbles: true}));\n``````\n`````` var expectedPath = ['A1a', 'A1-SR'];\n`````` assert_array_equals(log.eventPath, expectedPath);\n`````` assert_equals(log.eventPath.length, log.pathAtTargets.length);\n`````` assert_array_equals(log.pathAtTargets, expectedPath);\n`````` assert_array_equals(log.pathAtTargets, expectedPath);\n`````` }, 'The event must not propagate out of ' + mode + ' mode shadow boundaries when the composed flag is unset');\n``````}\n``````\n``````testNonComposedEvent('open');\n``````testNonComposedEvent('closed');\n``````\n``````/*\n``````-SR: ShadowRoot -S: Slot target: (~) relatedTarget: [~] *: indicates start digit: event path order\n``````A ------------------------------- A-SR\n``````+ B ------------ B-SR + A1 ----------- A1-SR (1)\n`````` + C + B1 --- B1-SR + A2-S [*; 0-1] + A1a (*; 0)\n`````` + D --- D-SR + B1a + B1b --- B1b-SR\n`````` + D1 + B1c-S + B1b1\n`````` + B1b2\n``````*/\n``````\n``````function testNonComposedEventWithRelatedTarget(mode) {\n`````` test(function () {\n`````` var nodes = createFixedTestTree(mode);\n`````` var log = dispatchEventWithEventLog(nodes, nodes.A1a, new MouseEvent('foo', {composed: false, bubbles: true, relatedTarget: nodes['A2-S']}));\n``````\n`````` var expectedPath = ['A1a', 'A1-SR'];\n`````` assert_array_equals(log.eventPath, expectedPath);\n`````` assert_equals(log.eventPath.length, log.pathAtTargets.length);\n`````` assert_array_equals(log.pathAtTargets, expectedPath);\n`````` assert_array_equals(log.pathAtTargets, expectedPath);\n`````` assert_array_equals(log.relatedTargets, ['A2-S', 'A2-S']);\n`````` }, 'The event must not propagate out of ' + mode + ' mode shadow boundaries when the composed flag is unset on an event with relatedTarget');\n``````}\n``````\n``````testNonComposedEventWithRelatedTarget('open');\n``````testNonComposedEventWithRelatedTarget('closed');\n``````\n``````/*\n``````-SR: ShadowRoot -S: Slot target: (~) relatedTarget: [~] *: indicates start digit: event path order\n``````A ------------------------------------------------ A-SR\n``````+ B ------------ B-SR (4) + A1 --- A1-SR\n`````` + C + B1 (3) [0,3-4] --- B1-SR (2) + A2-S + A1a\n`````` + D --- D-SR + B1a (*; 0) + B1b [1-2] --- B1b-SR\n`````` + D1 + B1c-S (1) + B1b1\n`````` + B1b2 [*]\n``````*/\n``````\n``````function testScopedEventWithUnscopedRelatedTargetThroughSlot(mode) {\n`````` test(function () {\n`````` var nodes = createFixedTestTree(mode);\n`````` var log = dispatchEventWithEventLog(nodes, nodes.B1a, new MouseEvent('foo', {scoped: true, relatedTargetScoped: false, bubbles: true, relatedTarget: 
nodes['B1b2']}));\n``````\n`````` var expectedPath = ['B1a', 'B1c-S', 'B1-SR', 'B1', 'B-SR'];\n`````` var pathExposedToB1a = ['B1a', 'B1', 'B-SR'];\n`````` assert_array_equals(log.eventPath, expectedPath);\n`````` assert_equals(log.eventPath.length, log.pathAtTargets.length);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB1a);\n`````` assert_array_equals(log.pathAtTargets, expectedPath);\n`````` assert_array_equals(log.pathAtTargets, expectedPath);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB1a);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB1a);\n`````` assert_array_equals(log.relatedTargets, ['B1', 'B1b', 'B1b', 'B1', 'B1']);\n`````` }, 'The event must not propagate out of ' + mode + ' mode shadow tree of the target but must propagate out of inner shadow trees when the scoped flag is set');\n``````}\n``````\n``````testScopedEventWithUnscopedRelatedTargetThroughSlot('open');\n``````testScopedEventWithUnscopedRelatedTargetThroughSlot('closed');\n``````\n``````/*\n``````-SR: ShadowRoot -S: Slot target: (~) relatedTarget: [~] *: indicates start digit: event path order\n``````A ------------------------------- A-SR (3)\n``````+ B ------------ B-SR + A1 (2) ------- A1-SR (1)\n`````` + C + B1 --- B1-SR + A2-S [*; 0-3] + A1a (*; 0)\n`````` + D --- D-SR + B1a + B1b --- B1b-SR\n`````` + D1 + B1c-S + B1b1\n`````` + B1b2\n``````*/\n``````\n``````function testComposedEventWithRelatedTarget(mode) {\n`````` test(function () {\n`````` var nodes = createFixedTestTree(mode);\n`````` log = dispatchEventWithEventLog(nodes, nodes.A1a, new MouseEvent('foo', {composed: true, bubbles: true, relatedTarget: nodes['A2-S']}));\n``````\n`````` var expectedPath = ['A1a', 'A1-SR', 'A1', 'A-SR'];\n`````` var pathExposedToA1 = ['A1', 'A-SR'];\n`````` assert_array_equals(log.eventPath, expectedPath);\n`````` assert_equals(log.eventPath.length, log.pathAtTargets.length);\n`````` assert_array_equals(log.pathAtTargets, expectedPath);\n`````` assert_array_equals(log.pathAtTargets, expectedPath);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToA1);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? 
expectedPath : pathExposedToA1);\n`````` assert_array_equals(log.relatedTargets, ['A2-S', 'A2-S', 'A2-S', 'A2-S']);\n`````` }, 'The event must propagate out of ' + mode + ' mode shadow tree in which the relative target and the relative related target are the same');\n``````}\n``````\n``````testComposedEventWithRelatedTarget('open');\n``````testComposedEventWithRelatedTarget('closed');\n``````\n``````/*\n``````-SR: ShadowRoot -S: Slot target: (~) relatedTarget: [~] *: indicates start digit: event path order\n``````A (8) [0-5,8] ---------------------------------------- A-SR (7)\n``````+ B (5) ------- B-SR (4) + A1 [6,7] --- A1-SR\n`````` + C + B1 (3) ------- B1-SR (2) + A2-S (6) + A1a [*]\n`````` + D --- D-SR + B1a (*; 0) + B1b ------- B1b-SR\n`````` + D1 + B1c-S (1) + B1b1\n`````` + B1b2\n``````*/\n``````\n``````function testComposedEventThroughSlot(mode) {\n`````` test(function () {\n`````` var nodes = createFixedTestTree(mode);\n`````` log = dispatchEventWithEventLog(nodes, nodes.B1a, new MouseEvent('foo', {composed: true, bubbles: true, relatedTarget: nodes.A1a}));\n``````\n`````` var expectedPath = ['B1a', 'B1c-S', 'B1-SR', 'B1', 'B-SR', 'B', 'A2-S', 'A-SR', 'A'];\n`````` var expectedRelatedTarget = ['A', 'A', 'A', 'A', 'A', 'A', 'A1', 'A1', 'A'];\n`````` var pathExposedToB1a = ['B1a', 'B1', 'B-SR', 'B', 'A'];\n`````` var pathExposedToB1cS = ['B1a', 'B1c-S', 'B1-SR', 'B1', 'B-SR', 'B', 'A'];\n`````` var pathExposedToB = [ 'B', 'A'];\n`````` var pathExposedToA1 = [ 'B', 'A2-S', 'A-SR', 'A'];\n``````\n`````` assert_array_equals(log.eventPath, expectedPath);\n`````` assert_equals(log.eventPath.length, log.pathAtTargets.length);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB1a);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB1cS);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB1cS);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB1a);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB1a);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToA1);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToA1);\n`````` assert_array_equals(log.pathAtTargets, mode == 'open' ? expectedPath : pathExposedToB);\n`````` assert_array_equals(log.relatedTargets, expectedRelatedTarget);\n`````` }, 'composedPath() must contain and only contain the unclosed nodes of target in ' + mode + ' mode shadow trees');\n``````}\n``````\n``````testComposedEventThroughSlot('open');\n``````testComposedEventThroughSlot('closed');\n``````\n``````</script>\n``````</body>\n``````</html>\n``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.547415,"math_prob":0.8532203,"size":10425,"snap":"2020-10-2020-16","text_gpt3_token_len":3294,"char_repetition_ratio":0.21782938,"word_repetition_ratio":0.4375,"special_character_ratio":0.3954916,"punctuation_ratio":0.23273942,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9797319,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-06T09:45:48Z\",\"WARC-Record-ID\":\"<urn:uuid:0d65c254-e6e9-40b3-a295-b1f978cf9ccc>\",\"Content-Length\":\"138897\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d8a368c8-df10-4f1e-ace3-36aba9eafc62>\",\"WARC-Concurrent-To\":\"<urn:uuid:6be266ef-6ff9-41de-a29f-086776a26bc6>\",\"WARC-IP-Address\":\"63.245.208.205\",\"WARC-Target-URI\":\"https://dxr.mozilla.org/mozilla-central/source/testing/web-platform/tests/shadow-dom/Extensions-to-Event-Interface.html\",\"WARC-Payload-Digest\":\"sha1:S5UEBCKYNPJB5IMIHOZBIPWR3ZA7MN4C\",\"WARC-Block-Digest\":\"sha1:PPRAEU46ZNGBTVS635BVZVWRCIYOJIBR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371620338.63_warc_CC-MAIN-20200406070848-20200406101348-00296.warc.gz\"}"}
https://ssa.cf.ac.uk/coia/rg_oia.html
[ "## Optimisation in Image Analysis", null, "#### Collaborative filtering for the analysis of colour images\n\nMost images can be approximated with high accuracy by an image with sparse representation in some basis. A representation is sparse if only a small number of coefficients in a linear combination are non-zero. The problem of optimal selection of the set of these non-zero coefficients is an optimisation problem in L_0 space. Very often, the solution to this difficult optimisation problem can be well approximated by a related optimisation problem in L_1 space. This problem is much simpler. In particular, the solution to this problem is a limit of a sequence of solutions of optimisation problems in L_2 space.\n\n#### Efficient storage of images using sparse representations\n\nCollaborative filtering is the process of filtering for information or patterns using techniques involving collaboration among multiple data sources. We apply these techniques for the analysis and efficient storage of colour images, which can be considered as a set of three highly correlated images.\n\n#### Analysis of similarity of images using SSA techniques\n\nSingular spectrum analysis (SSA) can be used for analysing not only time series but images too. Application of SSA to a set of images can be used for classification of images and identifying similar images (for example, images of the same person). We can also define a distance between images based on SSA-similarity between these images. First results show that SSA based classification of images often works better that the Support Vector Machines.\n\n#### Cardiff investigators\n\n• Prof Russell Davies\n• Prof Alexander Balinsky\n• Prof Anatoly Zhigljavsky\n• Dr Andrey Pepelyshev\n• Dr Valentina Moskvina" ]
[ null, "https://ssa.cf.ac.uk/coia/lena5.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8613354,"math_prob":0.7153661,"size":1984,"snap":"2022-27-2022-33","text_gpt3_token_len":420,"char_repetition_ratio":0.12878788,"word_repetition_ratio":0.0,"special_character_ratio":0.18800403,"punctuation_ratio":0.08459215,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95730627,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-07T22:43:26Z\",\"WARC-Record-ID\":\"<urn:uuid:c18c9f72-d73c-418c-ab68-c5d7b9402792>\",\"Content-Length\":\"7995\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25b935d1-ebb3-4b2b-916e-26064958f06b>\",\"WARC-Concurrent-To\":\"<urn:uuid:21de7c97-9b31-4d8a-bb42-e3b62077d8f4>\",\"WARC-IP-Address\":\"131.251.250.80\",\"WARC-Target-URI\":\"https://ssa.cf.ac.uk/coia/rg_oia.html\",\"WARC-Payload-Digest\":\"sha1:RKMNCK3ADKUV6CWTM57J3IQ7SKPHAWAY\",\"WARC-Block-Digest\":\"sha1:WDGDI64KDQARDM4UUAC5BNAFZTPV6A2K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570730.59_warc_CC-MAIN-20220807211157-20220808001157-00780.warc.gz\"}"}
https://www.physicsforums.com/threads/anyone-familiar-with-abels-insolubility-of-the-quintic-proof.345743/
[ "# Anyone familiar with Abel's insolubility of the quintic proof\n\nGreetings PF,\n\nI would like to sketch the proof that Abel gave for the insolubility of the quintic. This is a question of general interest to me, and one of those things, that I probably should see once. I am going off a translation of Abel's proof, and I hope to get very general guidance to the nature of the proof, i.e. what Abel was going for. Ultimately, I want to distill the proof into the elementary steps in an intuitive (?ahem?) manner, so that 1) a good sketch of the proof could be given, and 2) from the sketch, some form of the proof could be given that does NOT use full blown Galois theory, or any facts like Sym_5 is insoluble (although this does implicitly come up in the proof since ultimately the contraction that 120 = 10, is ultimately what causes the reductio to unravel.), and meanwhile, fully deconstruct the proof for understanding's sake.\n\nSo to start we assume that the quintic is soluble, so that\n$$y^5 + ay^4 + by^3 + cy^2 + dy + e$$ is solvable precisely in terms of radicals alone.\n\nNext we must use a consideration about the general form that any polynomial p(x) soluble by radicals must have. If it is soluble, it must be the sum of (possibly nested) algebraic manipulations (roots, exp, mult,div,plus,sub) of the coefficients of p(x)\n$$y = c_0 + c_1R^{\\frac{1}{m} + \\ldots + c_{m-1}$$,\nwhere in the above $$c_i, R$$ are all functions containing (nested algebraic) rational functions of only the coefficients. However, I don't understand the step where Abel claims that one can assume m to be prime. What realization have I missed. I know that ultimately, I am only desiring a deconstruction into a sketch, but I would like to do a full deconstruction in the process.\n\nSo far, the steps are\n1) Goal is to prove the quintic is in general insolvable by radicals.\n2) Assume it is to derive a contradiction\n3) Prove that the general form of the solution must be as expressed above.\n\nAnd for\n3a) Assert that y must be definable as a rational function of algebraic expressions of coefficients\n3b) Consider the coefficients a,b,c,d,e, and f(a,b,c,d,e) as a rational function of the coeff (as zero order)\n3c) consider$$f({p_0}^{\\frac{1}{m}})$$ as a \"first order\" algebraic function, where p_0 is zero order.\n3d) similarly $$f({p_m}^{\\frac{1}{m}})$$ is an m'th order function.\n3e) See that factoring m, and writing $$R^{\\frac{1}{m}}$$ as a sequence of successive radicals, we are allowed to assume m is prime (my problem)\n\nRelated Linear and Abstract Algebra News on Phys.org\nHurkyl\nStaff Emeritus\n$$\\sqrt[p]{\\sqrt[q]{R}} = \\sqrt[pq]{R}$$​\n$$\\sqrt{x}$$, it can be written as $$\\sqrt{\\sqrt{x}}$$. So assuming m is prime is fine." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9332391,"math_prob":0.99627715,"size":4974,"snap":"2021-04-2021-17","text_gpt3_token_len":1326,"char_repetition_ratio":0.11790744,"word_repetition_ratio":0.98472387,"special_character_ratio":0.26075593,"punctuation_ratio":0.109073356,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99955386,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-27T00:30:23Z\",\"WARC-Record-ID\":\"<urn:uuid:7e169b39-8483-42bf-be1a-1158321d5f59>\",\"Content-Length\":\"65463\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb76f255-ca2e-4598-9a0d-2338ff13bbbf>\",\"WARC-Concurrent-To\":\"<urn:uuid:93fe3a39-2d39-4dfb-84ed-c6d397fecd6e>\",\"WARC-IP-Address\":\"104.26.15.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/anyone-familiar-with-abels-insolubility-of-the-quintic-proof.345743/\",\"WARC-Payload-Digest\":\"sha1:VGH32OM63545Z5HURUX6NDFXCSXSLC6Z\",\"WARC-Block-Digest\":\"sha1:OCGEV2DXLXDSKFY6DFIQVRTYUBYGMHV4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704804187.81_warc_CC-MAIN-20210126233034-20210127023034-00607.warc.gz\"}"}
http://lauriesbookshelf.com/find-the-gcf-of-16-and-27/
[ "# Find The Gcf Of 16 And 27.\n\nFind The Gcf Of 16 And 27.. Use lcm to get gcf of 16 and 27. The first step to find the gcf of 16 and 27 is to list the factors of each number.", null, "GCF of 9 and 15 How to Find GCF of 9, 15? from www.cuemath.com\n\nThe first step to find the gcf of 16 and 27 is to list the factors of each number. Given input numbers are 27, 48, 9. 1 the largest number that divides both the numbers exactly.\n\n### Find The Gcf Of 15 And 35.\n\nThere are multiple ways to find the greatest common factor of given integers. For smaller numbers you can simply look at. If you're still in whole.\n\n### Here's How To Calculate Gcf Of 16 And 1000 Using The Formula, Step By Step Instructions Are Given Inside.\n\n1 the largest number that divides both the numbers exactly. Divide 27 (larger number) by 6 (smaller. Starting with the number 1 upto 8 (half of 16) and 1 upto 13 (half of 27).\n\n### Find The Gcf Of 16 And 27.\n\nFind the prime factorization of 27 27 = 3 x 3 x 3 step 3: List of positive integer factors of 27 that divides 27. Greatest common factor (gcf) of 16 and 8 is 8.\n\n### The Prime Factorization Of 27 Is 3 X 3 X 3 = 27.\n\nIn our second method, we'll create a list of all the factors of the 16 and 27 numbers. Gcf of 6 and 27 is the divisor that we get when the remainder becomes 0 after doing long division repeatedly. Find the prime factorization of 16.\n\n### Find The Gcf Of 15 And 35.\n\nFind the gcf of 16 and 27. To find the gcf, multiply all the prime factors common to both numbers: The number 1 and the number." ]
[ null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85346365,"math_prob":0.98528147,"size":1403,"snap":"2022-40-2023-06","text_gpt3_token_len":403,"char_repetition_ratio":0.16511793,"word_repetition_ratio":0.1993007,"special_character_ratio":0.30363506,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998964,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T15:02:31Z\",\"WARC-Record-ID\":\"<urn:uuid:114845a4-5ee8-4d22-9787-fccb61ae5d98>\",\"Content-Length\":\"51945\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f16bca90-26c4-4182-9ea1-567b84d269a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb4fc6d1-2799-4fb0-a7f1-dcbcdfe23f9d>\",\"WARC-IP-Address\":\"172.67.218.31\",\"WARC-Target-URI\":\"http://lauriesbookshelf.com/find-the-gcf-of-16-and-27/\",\"WARC-Payload-Digest\":\"sha1:UTRHBVUOJ726YYLB4M4ECDIG7HWMGJAT\",\"WARC-Block-Digest\":\"sha1:ERP274AXKIOYKZBCTOXSXF3MISBGXHWC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334579.46_warc_CC-MAIN-20220925132046-20220925162046-00190.warc.gz\"}"}
https://online-calculator.org/how-old-am-i-if-i-was-born-on-6-30-1998
[ "Online Calculators > Time Calculators\n\n# How old am I if I was born on June 30, 1998?\n\nHow old am I if I was born on June 30, 1998? - June 30, 1998 age to find out how old is someone born on June 30, 1998 in years, months, weeks, days, hours, minutes and seconds.\n\n## June 30, 1998 Age\n\nYou are 21 years 5 months, 1 week, and 3 days old\n\nor 258 months old\nor 1,119 weeks old\nor 7,833 days old\nor 187,992 hours old\nor 11,279,520 minutes old\nor 676,771,200 seconds old\nYou were born on a Tuesday.\n\n## Age Calculator\n\nBirth Date:\nToday's Date:\nHow old am I if I was born in 1998\n\nElectrical Calculators\nReal Estate Calculators\nAccounting Calculators\nConstruction Calculators\nSports Calculators\n\nFinancial Calculators\nCompound Interest Calculator\nMortgage Calculator\nHow Much House Can I Afford\nLoan Calculator\nStock Calculator\nOptions Calculator\nInvestment Calculator\nRetirement Calculator\n401k Calculator\neBay Fee Calculator\nPayPal Fee Calculator\nEtsy Fee Calculator\nMarkup Calculator\nTVM Calculator\nLTV Calculator\nAnnuity Calculator\nHow Much do I Make a Year\n\nMath Calculators\nMixed Number to Decimal\nRatio Simplifier\nPercentage Calculator\n\nHealth Calculators\nBMI Calculator\nWeight Loss Calculator\n\nConversion\nCM to Feet and Inches\nMM to Inches\n\nOthers\nHow Old am I\nRandom Name Picker\nRandom Number Generator\nMultiplication Chart" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9662974,"math_prob":0.98062795,"size":454,"snap":"2019-51-2020-05","text_gpt3_token_len":148,"char_repetition_ratio":0.15777777,"word_repetition_ratio":0.0,"special_character_ratio":0.38986784,"punctuation_ratio":0.1754386,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9677145,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-10T17:50:30Z\",\"WARC-Record-ID\":\"<urn:uuid:0d363be3-6423-46d8-9d18-6d0b56c5222c>\",\"Content-Length\":\"12878\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0136ea73-4b4e-40b1-a6c9-bf6ebafe2e2e>\",\"WARC-Concurrent-To\":\"<urn:uuid:f29d3a28-e762-4679-8d4d-0d6eed010751>\",\"WARC-IP-Address\":\"184.154.80.211\",\"WARC-Target-URI\":\"https://online-calculator.org/how-old-am-i-if-i-was-born-on-6-30-1998\",\"WARC-Payload-Digest\":\"sha1:3B7O3BNXVQVB4XMNAOTXRLKTBQX4PFZ5\",\"WARC-Block-Digest\":\"sha1:QPE4QPZL2FBX45HIVG367HHSYMKXALPO\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540528457.66_warc_CC-MAIN-20191210152154-20191210180154-00499.warc.gz\"}"}
https://www.geteasysolution.com/Factors-of-21952000
[ "# Factors of 21952000\n\nBelow you can find the full step by step solution for you problem. We hope it will be very helpful for you and it will help you to understand the solving process.\n\nIf it's not what You are looking for type in the field below your own integer, and You will get the solution.\n\nFactors of 21952000:\n\nBy prime factorization of 21952000 we follow 5 simple steps:\n1. We write number 21952000 above a 2-column table\n2. We divide 21952000 by the smallest possible prime factor\n3. We write down on the left side of the table the prime factor and next number to factorize on the ride side\n4. We continue to factor in this fashion (we deal with odd numbers by trying small prime factors)\n5. We continue until we reach 1 on the ride side of the table\n\n 21952000 prime factors number to factorize 2 10976000 2 5488000 2 2744000 2 1372000 2 686000 2 343000 2 171500 2 85750 2 42875 5 8575 5 1715 5 343 7 49 7 7 7 1\n\nFactors of 21952000 = 1×2×2×2×2×2×2×2×2×2×5×5×5×7×7×7= $1 × 2^9 × 5^3 × 7^3$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83677256,"math_prob":0.96776056,"size":1796,"snap":"2021-43-2021-49","text_gpt3_token_len":525,"char_repetition_ratio":0.31529018,"word_repetition_ratio":0.04057971,"special_character_ratio":0.43596882,"punctuation_ratio":0.033783782,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9754103,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T13:05:27Z\",\"WARC-Record-ID\":\"<urn:uuid:6e1a4c45-35aa-4b7d-a36d-31d641435019>\",\"Content-Length\":\"22913\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:45012871-e842-4e9f-93e5-6ed7ea00d1c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc0e7b72-3d03-4d4d-a207-8226f6bb4395>\",\"WARC-IP-Address\":\"51.91.60.1\",\"WARC-Target-URI\":\"https://www.geteasysolution.com/Factors-of-21952000\",\"WARC-Payload-Digest\":\"sha1:SMM57FUA7CDYO3XOFIUOGFXBNBTAWGFP\",\"WARC-Block-Digest\":\"sha1:55D56QCBNXUFXYDIE3FGTM6DIPHCT36O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363510.40_warc_CC-MAIN-20211208114112-20211208144112-00596.warc.gz\"}"}