URL
stringlengths 15
1.68k
| text_list
sequencelengths 1
199
| image_list
sequencelengths 1
199
| metadata
stringlengths 1.19k
3.08k
|
---|---|---|---|
https://www.neetprep.com/question/5708-value-sf-Kp-Kp-reactionsXYZ-AB-ratio--degree-dissociation-X-equal-totalpressure-equilibrium---ratioa--b-c--d-?courseId=19 | [
"NEET Questions Solved\n\nThe value sf Kp1 and Kp2 for the reactions\n\nX$⇌$Y+Z ....(1)\n\nand A$⇌$2B ......(2)\n\nare in the ratio 9:1. If degree of dissociation of X and A be equal, then total pressure at equilibrium (1) and (2) are in the ratio:\n\n(a) 1:9 (b) 36:1\n\n(c) 1:1 (d) 3:1\n\n(b)\n\nFor X$⇌$ Y + Z for A $⇌$2B\n\n1 0 0 1 0\n\n1-$\\mathrm{\\alpha }$ $\\mathrm{\\alpha }$ $\\mathrm{\\alpha }$ 1-$\\mathrm{\\alpha }$ $\\mathrm{\\alpha }$\n\n..\n\n9 =\n\n..$\\frac{{P}_{\\mathit{1}}}{{P}_{\\mathit{2}}}$ = 36\n\nDifficulty Level:\n\n• 6%\n• 84%\n• 9%\n• 3%"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6876007,"math_prob":0.9998908,"size":316,"snap":"2019-26-2019-30","text_gpt3_token_len":134,"char_repetition_ratio":0.118589744,"word_repetition_ratio":0.0,"special_character_ratio":0.47468355,"punctuation_ratio":0.25252524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99974304,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T07:56:08Z\",\"WARC-Record-ID\":\"<urn:uuid:39048179-d5c3-4a18-ab21-e378a293c80e>\",\"Content-Length\":\"96362\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8838d379-94ea-482a-891f-ebf669383f97>\",\"WARC-Concurrent-To\":\"<urn:uuid:386b93dc-f473-42b4-9a04-745f0ab4303a>\",\"WARC-IP-Address\":\"54.186.44.90\",\"WARC-Target-URI\":\"https://www.neetprep.com/question/5708-value-sf-Kp-Kp-reactionsXYZ-AB-ratio--degree-dissociation-X-equal-totalpressure-equilibrium---ratioa--b-c--d-?courseId=19\",\"WARC-Payload-Digest\":\"sha1:MZARA5C5APL5P4PITG5UWSUVDELKJDT3\",\"WARC-Block-Digest\":\"sha1:PURND4DTWVHT6HPUGMLEYGB56ISJM7EG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999163.73_warc_CC-MAIN-20190620065141-20190620091141-00350.warc.gz\"}"} |
https://www.scirp.org/Journal/PaperInformation.aspx?PaperID=94656 | [
"An Earthquake Model Based on Fatigue Mechanism—A Tale of Earthquake Triad\n\nEarthquakes are the result of strain build-up from without and erosion from within faults. A generic co-seismic condition includes merely three angles representing respectively fault geometry, fault strength and the ratio of fault coupling to lithostatic load. Correspondingly, gravity fluctuation, bridging effect, and granular material production/distribution form the earthquake triad. As a dynamic component of the gravity field, groundwater fluctuation is the nexus among the three intervened components and plays a pivotal role in regulating major earthquake irregularity: reducing natural (dry) inter-seismic periods and lowering magnitudes. It may act mechanical-directly (MD) through super-imposing a seismogenic lateral stress field thus aiding plate-coupling from without; or mechanical-indirectly (MI) by enhancing fault fatigue, hence weakening the fault from within. A minimum requirement for a working earthquake prediction system is stipulated and implemented into a well-vetted numerical model. This fatigue mechanism based modeling system is an important supplement to the canonical frictional theory of tectonic earthquakes. For collisional systems (e.g., peri-Tibetan Plateau regions), MD mechanism dominates, because the orographically-induced spatially highly biased precipitation is effectively channeled into deeper depth by the prevalence of through-cut faults. Droughts elsewhere also are seismogenic but likely through MI effects. For example, ENSO, as the dominant player for regional precipitation, has strong influence on the gravity field over Andes. Major earthquakes, although bearing the same 4 - 7 years occurrence frequency as ENSO, have a significant hiatus, tracing gravity fluctuations. That granular channels left behind by seamounts foster major earthquakes further aver the relevance of MI over Andes. Similarly, the stability of the Cascadia fault is found remotely affected by Californian droughts (2011-15), which created a 0.15 kPa/km stress gradient along the Pacific range, which also is the wave guide.\n\nKEYWORDS\n\n1. Earthquakes as Frictional Phenomena and Earthquake Triad\n\nTectonic earthquakes are frictional phenomena between contacting plates (Scholz, 1998; Wang & Hu, 2006; Wang & He, 1999) . It is a game of gravity- (or more generally compressive stress-) aided friction resisting against the ever-increasing plate-coupling stress ( $\\tau$ ) resulted from differential plate motions. For an ideal configuration (Figure 1(a)) of uniform fault geometry (i.e., constant slope angle $\\theta$ and unlimited width in the transverse direction) and uniform material (so that maximum frictional coefficient ${\\mu }^{\\prime }=\\mathrm{tan}\\left({\\theta }_{f}\\right)$ being a constant), the co-seismic condition is\n\n${\\varphi }_{c}=\\theta +{\\theta }_{f}$ . (1)\n\nwhere repose angle $\\varphi =\\mathrm{arctan}\\left(\\tau /G\\right)$ , with G being compressive stress load on the shear (fault) zone. Subscript c means critical value. $\\theta$ is the slope of the fault zone, determined by the geometry of the plate interface, and ${\\theta }_{f}$ is the maximum static friction angle corresponding to fault strength. Values of ${\\theta }_{f}$ depend on wall rock material properties and lithologies. During inter-seismic stage, the repose angle increases steadily and approaches the sum of fault slope angle and the maximum static friction angle. Driving stress increase and the overall resistive stress decrease (weakening of the fault, or ${\\theta }_{f}$ decrease) both are seismogenesis (Scholz, 1998) . In reality, erosion from within and burden from without likely are simultaneous. Earthquake cycle is a stress adjustment cycle. Closely related to three angles in Equation (1) are the earthquake triads: Gravity fluctuation of the overriding plate, bridging effect and fatigue. With fault strength set as constant, fault geometry and plate-coupling stress determine a natural (limiting) earthquake occurrence frequency. However, actual occurrence seldom obeys this limit and usually occurs far ahead of schedule, due to various fault-weakening factors.\n\nSince displacements caused by each earthquake (ranging between 0.5 - 3 m) are at least 3 orders of magnitude smaller than the dimension of the seismically locked fault zone, which could well be several hundred kilometers, material in the locked fault zone is well-seasoned: all having experienced numerous times of wearing and tearing. For predicting near future earthquakes, the geological backgrounds, including material properties and fault geometry, can be safely assumed to be constant. Thus, the natural limiting occurrence frequency over a fault is rather stable. The irregularity in earthquake occurrence is primarily due to fault weakening mechanism. This study focuses on more dynamic fluctuations affecting the triad of earthquake. Following discussions use subduction fault as prototype. By substituting compressive stress for lithostatic loading, same reasoning is viable for strike-slip faults. Strictly, simulating an earthquake event includes estimates of its timing (Section 2), epicenter location and magnitude (Section 2.1). Actual full 3D numerical model, with detailed parameterization outlined (Section 3) is used in the simulations discussed in Section 4. In addition to the parameterizations of material rheology (visco-elastic mantle, granular debris as produced by seismic events of all kinds, and the brittle elastic crust), Section 3 also is dedicated to the setting of the fault-following model grids, static geologic properties (e.g., plate interface geometry and physical property of each layer of medium).\n\n2. Complexity of the Earthquake Problem: Under-Determination and Interactions of the Triad\n\nDifferent segments of a realistic heterogeneous fault zone (Figure 1(b)) have different slope angles and maximum friction coefficients and also experience different loads. The driving stress distributed to each segment approaches its maximum affordable resistive stress gradually, like the saturation process of porous media. Fault “stress saturation” is thus analogously defined here as the ratio of driving stress to maximum affordable resistive stress.\n\n${S}_{i}={\\tau }_{i}/\\left[{G}_{i}\\mathrm{tan}\\left({\\theta }_{i}+{\\theta }_{f,i}\\right)\\right]=\\mathrm{tan}\\left({\\varphi }_{i}\\right)/\\mathrm{tan}\\left({\\theta }_{i}+{\\theta }_{f,i}\\right)$ (2)\n\nConsequently, an S value of 0 corresponds to full locking while an S of 1 corresponds to creeping. The overall stability of a realistic fault becomes ${\\stackrel{¯}{S}}_{i}=\\left[\\underset{i}{\\sum }{\\tau }_{i}\\right]/\\left[\\underset{i}{\\sum }{G}_{i}\\left({{\\mu }^{\\prime }}_{i}+\\mathrm{tan}{\\theta }_{i}\\right)/\\left(1-{{\\mu }^{\\prime }}_{i}\\mathrm{tan}{\\theta }_{i}\\right)\\right]\\le 1$ . S is thus a useful diagnostic index for fault stability.\n\n2.1. Multi-Solution in Seismogenesis\n\nIn reality, earthquakes involving the saturation of the entire shear zone seldom occur, because actual tectonic plates have difficulty in maintaining rigidity over extended distance, manifested as the existence of through-cut faults and many small magnitude earthquakes within close proximity to each other. Consequently, many places do not contribute their shares of resistance and depend on neighbors to stay in place. Depending on how $\\tau$ is distributed on the fault zone, there can be various “legal” combinations of saturated and partially saturated segments along the fault zone. In reality, earthquake occurrence can be compounded by any factors affecting $\\tau$ , G, and ${\\theta }_{f}$ (or fault strength ${\\mu }^{\\prime }$ ) in Equation (2). Multi-solution in seismogenesis essentially is resulted from the lack of enough information as constraints on, or an under-determination issue of lateral interactions. Realizing that earthquakes, as a result of relative motion of plates involved, inevitably have footprints on the mass (Wallace et al., 2009; Ren et al., 2015) , energy (Gao & Wang, 2014) and momentum fields ( Wang et al., 2012 and references therein), efforts for earthquake prediction are increasingly based on observational facts of these physical parameters and their fields.\n\n(a)(b)(c)(d)\n\nFigure 1. Diagram of an ideal crust front/fault (a) and a plane view of a more realistic plate interface (b). Panel (a) is a highly idealized continental plate overlying oceanic plate configuration. Black bold arrows in (a) show the opposing oceanic plate and the overlain continental plate. Stress analysis is performed for an arbitrary point (P) at the shear zone. $\\mu$ is the static friction; G is the gross gravity of the overlain plate section; $\\tau$ is the driving stress resulted from tectonic motion. Static friction and gravity work together to contest against continental driving stress. Once the driving stress overcomes gravity-aided maximum static friction, a co-seismic sliding occurs. Red-toned color shades illustrate stress saturation. Granular material, often in the form of fault gouge, inter-laces the plate-interface (b). Either because geometrical irregularity of the contacting interface, or because the existence of density anomaly (e.g., cavity) above, there are places (“pillars” or strong contact regions) that take a disproportionally larger load than the neighboring regions (Bridging effect). Plates are “riveted” together by these “sticky” spots. These strong contact regions tend to wear down faster, through generating granular debris more actively. “Asperities” and “barriers” (Wallace et al., 2004) can both be the “pillars” defined here. Panel (c) shows the vertical resistive stress σzz fields around a region with significant bridging effects (i.e., with cavity caused gravity anomaly). Note that the under-belly of the overlain plate is in weak contact with lower plate and also is a region with active granular material production (that region experiences strong tensile stress). The fault creeps primarily by wearing geometrical irregularities or heterogeneous “rises/bumps/sticky spots”. The epicenter is the ensemble (weighted by frictional heat released at these regions) mean location of the sticky points being co-seismically unlocked. Real cavity usually occurs at much shallower depths. Fault interface underneath still feels the bridging effects by producing a corresponding uneven granular material generation pattern. Panel (d) is SEGMENT simulated global distribution of inter-pate granular material thickness (m).\n\nThe co-seismic release of potential energy is transformed mainly into heat through inter-plate frictional force. A large, deep-burying area approaching saturation simultaneously, un-locking systematically and releasing the stored elastic energy coherently generally produces large magnitude earthquakes (Wang & Bilek, 2011) . The contact surface of actual tectonic plates can have sophisticated topography and may involve material heterogeneity (Figure 1(b)). Therefore, it is difficult to predict the future rupture propagation without knowing the details at the interface. Recent studies (Gao & Wang, 2014; Wang et al., 2012; Wang & Bilek, 2011) indicate that widespread and smooth contact between two relatively weaker plates harbors large earthquakes—a situation usually satisfied at the megathrusts. As an opposite case of rough contact, the subducting seamounts generally discourage the generation and propagation of large ruptures. This is because the seamounts consume a very small fractional of the plate interface, acting more effectively as a grinder for the overlain plate rather than as a blocker (Figure 1(b); Wang & Bilek, 2011 ). Downstream of these seamounts, there are stripes of granular debris composed by the material scrapped off both the sea mounts and the overlain continental plates. These existing findings learn further support to the importance of lateral coherence among different segments of the fault in determining the magnitude of the earthquakes (Equation (2)).\n\n2.2. Intertwining Earthquake Triad\n\nOne important factor causing non-uniform distribution of driving stress among segments is the existence of macroscopic crevasses of voids/cavity within the plates (Rowe et al., 2012) . In reality, bridging effects (Ren, 2014a) —load directly above can be partially carried by neighboring segments (or vise-versa)—add yet another layer of complexity. Bridging effect contributes to gross resistive stress only when the fault interface has non-uniform friction coefficients. Otherwise, if the interface has same roughness ${\\mu }^{\\prime }$ , the net contribution to resistive stress from bridging effects may be minuscule due to the canceling effects from neighboring regions that is relieved portion of their lithostatic stress.\n\nFrom Equation (2), gravity of the overriding plate always serves as a stabilizing factor. The reduction of gravity (e.g., from extended droughts) contributes to instability. Fluctuations in gravity field not only have local stability consequence, they also affect neighboring regions through bridging effect and a unique fatigue process of the fault zone: granular material (GM, Jop et al., 2006; Ren et al., 2008 ) generation and transportation, a common fault-weakening mechanism. Production of granular debris is the primary form of fatigue of the fault interface. Regions of strong contact, likely being a result of bridging effect, encourage active granular material generation. Ironically, granular material has much smaller viscosity and acts as lubricant for the plates involved. Regions carrying more loading are now footed on slippery floor (i.e., more weight is loaded on smooth contacts and less is loaded on rough contacts along the shear zone). Thus, bridging effect, through affecting granular material generation and redistribution, influences the frictional properties of the fault interface and achieves a negative spatial covariance between loading and fault strengths, being seismogenic (i.e., contributing to fault instability) according to Equation (2) (i.e. reduction of the $\\underset{i}{\\sum }{\\tau }_{i}$ ). Sources of bridging effects are plenty. For example, large scale structures in the overriding plate such as mountains and valleys signifies bridging effects at depth on the seismogenic fault zone (Bejar-Pizarro et al., 2013) . Even for regions without topographic features, existence of cavities/crevasses at depth (not necessarily all the way to the locked interface; the vertical change in loading transfers readily to horizontal stress in the fault circumstances) causes non-negligible bridging effects on faults (to be further detailed in Section 3.1.1). Bridging effects from static geological background, although may cause variations in the frictional properties of the seismogenic zone, vary slowly and are not responsible for short term variations in major earthquake occurrence. Large scale variations in groundwater, however is more dynamic and is a viable candidate for explaining earthquake occurrence irregularity on decadal to centennial time scales. Cyclic forcing is most effective in causing fatigue (Ren & Leslie, 2014) . Cyclic groundwater fluctuations, especially when acting in concord with the resonance frequency of the plates, assist granular material generation.\n\nThus, the earthquake triad is interlinked and hence reinforces one another through groundwater fluctuation as the nexus. Groundwater-aided granular material generation (G3) is the mechanism modulating earthquake cycles. Detailed discussion on fatigue in rock interface, bridging effects and their physical parameterizations in the Scalable Extensible Geofluid Modeling framework for ENvironmenT Intelligence System (SEGMENT, Ren, 2014a; Ren et al., 2008; Ren et al., 2012 ) can be found in Section 3, together with the sources of input datasets. Through solving conservation equations of mass, momentum and energy, the SEGMENT provides 3D distribution of strain, stress, and full three components of velocity in the simulation domain. How a major earthquake is identified using these prognostics is described in Section 3, irrespective of the physical domain they reside (on the main thrust interpolate or not) and the morphology (e.g., megathrust earthquakes or simultaneous rupture of several strike-slip faults as the 2012 Indian Ocean earthquake). It is noteworthy that variables in Equations (1) and (2), such as the degree of stress saturation, are merely diagnostics of SEGMENT instead of being prognostics. Through these diagnostics, one can vividly view the stress build-up during the inter-seismic stage. Section 3 also includes detailed discussions of recondite approaches in obtaining present global distribution of granular material thickness and the future evolution.\n\n3. Earthquake Triad (3.1) and Their Representation in SEGMENT Modeling System (3.2)\n\nFault geometry, although being the far-largest factor in determining the timing of major earthquakes, are looked upon as geological background here. The super-imposed gravity fluctuations are primarily responsible for earthquake irregularity. Following discussion focuses on the bridging effect and the fault weakening consequences from granular material (GM) formation. Groundwater is not singled out as a subsection because it is the nexus among the components and it is natural to discuss it as an integrated component throughout the text. The following is a description of a minimum requirement for a working earthquake prediction system for realistic 3D fault configuration—the Groundwater aided Granular material Generation (G3) framework and its implemented in the well-vetted SEGMENT system.\n\n3.1.1. Bridging Effects at the Fault Zone Make Instability “Non-Local”\n\nThat different portions of the fault/shear zone contribute different resistive stresses primarily is due to uneven loading (the variations in Gi along the shear zone). There are many causes for loading deviation from the lithostatic values. For convenience, a generic term “bridging effects” (Figure 1(c); Ren, 2014a ) is assigned to this phenomenon. It is a disturbance to the vertical resistive stress field (RZ). The “G” in Equations (1) and (2) is a special case of RZ when bridging effect is insignificant. In principle, disturbances at any depth above the fault zone can be felt at the plate interface. It must be noted that, for a point source (of horizontal dimension much less than fault dimension), the influence range of bridging effect in each horizontal direction expands linearly. So the footprint increases at a square power of depth. Consequently, the amplitude of the disturbance decreases at a rate of depth-squared. Thus, the closer the disturbance source is to the shear zone, the more effective in superposing onto RZ and contributing to motion tendency of this segment of the shear zone. According to this reasoning, localized load disturbances such as man-made infrastructure (e.g., dams, high raisers) are far less important than a cavity of similar size existing at 20 km depth in disturbing the load distribution of a shear zone at ~30 km depth. The limiting size (L) of cavity at depth H depends on rock tensile strength (fc) and density ( $\\rho$ ) according to a square root law: $L=b\\sqrt{{f}_{c}H/\\rho g}$ , with b a cavity geometry dependent constant. It is apparent that larger dimension cavities are allowed at deeper depths of the brittle crust. At deeper depth (>~30 km), the thermodynamic reduction in tensile strength of rock dominates and causes drastic reduction in L. As a result, the cavities on the locked interface are of sub-meter scale at most (Ujjie et al., 2007; Cowan et al., 2003) . Thus, bridging effects from inter-plate geometry irregularity may be only non-negligible for regions surrounding sea mounts for earthquake evolution and lifecycles. These features have clear remote-sensing signature, especially when experiencing transition in size as a result of cave-in (usually a result of continued granular material formation). Loading irregularity is thus the most prevalent form of bridging effects. Load fluctuations of shallower level, only when occurring at large enough spatial scale, affect fault stability. In this sense, regional scale groundwater fluctuations, even only resides in upper couple kilometers depth, may have significant impact on fault stability. The deeper they reach (such as around the through-cut faults in the Longmenshan Fault system (to be detailed in the discussion section) the more direct and efficient they are in affecting fault stability.\n\nThe bridging effect contributes to gross resistive stress only when the plate interface has non-uniform frictional properties. To have a net effect on resistive stress, the extra loading should be negatively coupled with the spatial variation of fault strength (i.e., their spatial co-variance being negative). Granular material (Jop et al., 2006; Ren et al., 2008) formation and transportation is such a nexus. Through bridging effects, neighbor’s weight is partially transferred over regions with strong contacts (like “pillars” of a bridge). There may not be any macroscopic voids; by “pillars” it is meant only of strong compressive stress region, or strong contact regions. These regions of strong contact encourage granular material formation (a consensus of yielding criteria of brittle rocks; Rowe et al., 2012 ). With the formation of granular material, which has viscosity (<104 Pa S) many orders of magnitude smaller than polycrystalline rocks (>1021 Pa S), the shear zone with strong contacts weakens and this is seismogenic. For example, if only 5 mm granular material is put between rock plates under the confining pressure similar to the locked fault zone, the equivalent frictional coefficient, or fault strength drops to <10−4, orders of magnitude smaller than solid-against-solid rock friction.\n\nSea mounts are a type of inter-plate “pillars/asperities” that grind upper plate and also wear out themselves. “Bridging effect” actually is the controlling factor for granular material generation. Bridging effect caused co-seismic and interseismic stress irregularities has strain/deformation consequences at all depths up to the earth surface, where the near surface displacements (directly associated with strain buildup and release) can now be accurately measured by Global Positioning System (GPS; Chlieh et al., 2011; Sun et al., 2014; Loveless & Mead, 2010 ) and Synthetic Aperture Radar interferometry (InSAR; e.g., Weston et al., 2012; Cavalie & Jonsson, 2014 and references therein) instruments. From Equation (2), it contributes to stability if more weight is loaded on rough contacts and less is loaded on smooth contacts along the shear zone. Bridging effect, through affecting granular material production rates, influences the frictional properties of the fault interface and achieves the seismogenic effect of “more weight is loaded on smooth contacts and less is loaded on rough contacts along the shear zone”. Sources of bridging effects are plenty. For example, large scale structures in the overriding plate such as mountains and valleys signify bridging effects at depth on the seismogenic fault zone (Bejar-Pizarro et al., 2013) . However, topographic features caused bridging effects and hence variations in the frictional properties of the seismogenic zone cannot explain the irregularity of earthquakes that occur on decadal to centennial scales. Large scale variations in groundwater, however is a viable candidate. Thus, fluctuations in gravity field not only have local stability consequence (Equation (2)), they also affect neighboring regions (laterally) through bridging effect and the unique fatigue process of the fault zone.\n\n3.1.2. Fatigue in Rock Interface—Generation of Granular Material\n\nTwo objects, when being pressed against each other, wear and tear at the interface, a manifestation of material fatigue. The interfaces of tectonic plates are of no exception. One proof is the existence of widespread fault gouge and pseudotachylyte melt in wall rocks (Rowe et al., 2012) . Seismic activities, irrespective magnitudes, produce granular material. The generation rate of GM however is a functional of at least three factors: 1) the degree of imbalance of the three principle stress components, in a form of yielding criteria for brittle material; 2) the existing amount of granular material or the current state of GM; and 3) forming granular material consumes elastic energy accumulated in the plates, through forming new surfaces under huge confining pressure of the fault environment. Point (1) is supported by recent findings of frictional melt along the fault surface. It happens at only a small fraction of the total rupture surface area but always begins at asperities, the effectively strongest points on the fault surface. Cyclic forcing is most effective in causing fatigue. Groundwater fluctuations, especially when acting in concord with the resonance frequency of the plate segment, enhance granular material generation. The second requirement is because, as granular material builds up at the interface, it hinders further production of granular material in exactly the same way accumulated saw dust hinders further cutting of a piece of wood. This is especially relevant for the inter-plate granular material because, compared with its parent rock it has larger porosity and easily fills up the limited inter-plate voids. Earthquakes are such an effective mechanism for removing and re-distributing existent granular material, specifically by thermal pressurization or acoustic fluidization. Granular material production and redistribution achieve a negative spatial covariance between loading and effective frictional coefficients, being seismogenic according to Equation (2).\n\nIn addition to the critical role played in granular material generation, groundwater, as an extra loading on the overriding plate, is by itself a stabilizing factor, according to Equation (2). Reduction of groundwater, usually as a result of extended wide-spread droughts, is however seismogenic. Although the weight of groundwater itself is negligible compared with the weight of the overriding plate, it is of the same order of magnitude as the residual stress (between plate-coupling compressive stress and gravity-aided friction). That is, the inter-seismic stage is actually in a very tricky balance (Wallace et al., 2017) . Fluctuation of groundwater, through superimposing a lateral/horizontal stress gradient, may play a critical role in the timing of major earthquakes. Fortuitously, fluctuations in groundwater can now be remotely sensed by gravity satellites such as the Gravity Recovery and Climate Experiment (GRACE; Famiglietti & Rodell, 2013; Ren, 2014b; Voss et al., 2013 ) and by InSAR measurements (Liu et al., 2017) .\n\nCentered on the master co-seismic criterion (Equation (1)), the triad determining earthquake irregularity is discussed. It is now clear that, because of the heterogeneity in driving stress distribution, and the overlain loading condition, unlocking is often not completed at once. Rather, different geological locations may unlock in certain orders. The ensemble center of the ruptures is the epicenter and the magnitude is directly related to the rupture size. Geological backgrounds such as fault geometry and the plate motion speeds evolve rather slowly. The irregularity in earthquake occurrence mainly is resulted from the inter-plate granular material production and transportation. Large spatial scale fluctuations in groundwater (i.e., those arising from large scale droughts and floods), through bridging effects, are a suitable mechanism in granular material generation. In a certain sense, the climate fluctuations, through affecting groundwater loading pattern, affect the earthquake cycles. Bridging effects by themselves may not cause fault weakening. But with the associated granular material generation, they weaken faults through forming a negative spatial covariance pattern between loads and friction coefficients. The generation and transportation of granular material at the plate contact reconcile above-mentioned counter-intuitive characters of major earthquakes (e.g., as enumerated in Wang (2015) ) and also are critical in orchestra of small-scale ruptures to generate major earthquakes.\n\nThis study focuses on subduction zones (e.g., Andes and Cascadia) because they are responsible for the largest earthquakes and also because they have significant footprints on the gravity (mass) and thermal (Gao & Wang, 2014) fields. The continental collisions (continental against continental, e.g., Himalayas) here are not differentiated from subductions (oceanic against continental plates). Despite the improved measurements of the Earth’s structure and rupture geometry, the timing of major earthquakes remains a scientific puzzle (Pritchard & Simons, 2006; Sawai et al., 2004) . In this research, first order schemes representing earthquake triad are implemented in a sophisticated 3D modeling system, to estimate the spatio-temporal scales of major earthquakes.\n\n3.2. Numerical Model Setup, Parameterization of the Earthquake Triads, and Data Acquisition\n\nAbove discussed theoretical concepts, or the Groundwater aided Granular material Generation—G3 hypothesis is implemented into a global spherical crown version of the SEGMENT, with the stress decomposition into driving stress and resistive stress follows the convention of Van der Veen and Whillans (1989) . The SEGMENT solves the equations of mass, momentum and energy conservation in a 3D spherical geometry. The model prognostics include three components of velocity, temperature, and six components of the deviotoric stresses. Their evolution is depicted by the time marching of the governing equations. Variables in Equations (1) and (2), such as the degree of stress saturation, are just diagnostics of SEGMENT instead of being prognostics. Through these diagnostics, one can vividly view the stress build-up during the inter-seismic stage. A major earthquake in model is identified as more than 104 km2 adjacent model grids in the horizontal dimension possess macroscopic motion velocities. Once an event is identified, the released energy is estimated within a time window fully covers the time period with macroscopic motion. An epicenter is diagnosed as the geological locations of the saturated grids, weighted by the respective energy release. Degree of stress saturation S is not directly used to identify earthquakes also because the grid elements are not isolated; there are interactions among adjacent grids. If the unlocking/rupture at a grid point releases a large amount of energy, a neighboring grid point, even still quite away from saturation, can be unlocked and set into motion. In fact, some very large earthquakes are formed because there are some sections of the fault (not necessarily adjacent to each other) get simultaneously ruptured and the released energy triggered neighboring sections, resulting in an upward spiral that causes a domino effect in propagating away of the rupture region.\n\n3.2.1. Grid Stencil of SEGMENT-Fault Zone Oriented Coordinate System\n\nFor this study, the original 3D spherical model of the upper ~4 km medium is now applied into a 500 km thick lithospheric material (including crust and upper mantle). The horizontal resolution is generally 30 km at the earth surface and refined to 2 km at subduction zones (Figure 2). Vertical resolution varies as 101 stretched vertical layers are used to represent the 500 km depth. The stretching scheme is used so that the shear zone gets better represented. In setting up grids in the model simulation domain, special attention is paid so that the upper surfaces of the oceanic plates are locally parallel to the model grids spanned surface (there always is a vertical level that has surface parallel to the oceanic plate’s upper surface).\n\n3.2.2. Static Geological Parameters and Viscosity Parameterizations\n\nLike other 3D tectonic models (e.g. subduction zone models), an accurate representation of the fault geometry (e.g., interface geometry such as subduction thrust geometry). This requires high resolution geodic and seismic data. The CRUST1.0 (Laske et al., 2013) is used to set up the density values over the grids within the crust layer (oceanic and continental crusts). Underneath bottom of the crust layer (shown in CRUST 1.0) is assumed to be upper mantle material (silicates) and given a uniform viscosity of ${\\eta }_{0}=\\nu ={10}^{19}\\text{\\hspace{0.17em}}\\text{Pa}\\cdot \\text{S}$ and a density of 3.3 × 103 kg/m3 as base values and a pressure- and temperature-dependent modifier (to be detailed very soon). In setting up the grids above present sea level, the Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER) global digital elevation map (SATER DEM, asterweb.jpl.nasa.gov/) and ETOPO1 (downloadable from https://www.ngdc.noaa.gov/mgg/global/global.html) bathymetry are referenced to. For faults with intensive gravity measurements, for example the Longmenshan Fault Zone (LFZ) of the eastern margin of the Tibetan Plateau, there was a detailed gravity survey after the Mw7.9 Wenchuan earthquake on 12 May 2008. Finer resolution data on elastic plate thickness, density and rigidity profiles are available and used in replacement of the CRUST1.0-derived parameters. Similarly, the Andes survey (Andes Geological Survey, 2006) and data pointed out by references therein provided information also is merged into the coarse resolution CRUST 1.0. For the Hikurangi subduction zone (New Zealand), the revised interface geometry (Williams et al., 2013) are used instead of the CRUST 1.0. For the Puysegur trench, we adapted the geometry data from Liu and Bird (2002) . For the Cascadia, Mexico, and Japan Trench, the higher resolution interface geometry are obtained from Gao and Wang (2017) . Personal communication with Z. Liu also provided higher resolution geometry data along the west frontal range of N. America. Other regions we have to tolerate the lower resolution interface geometry provided by SLAB 1.0 (Hayes et al., 2012) .\n\nBased on the fact that the co-seismic ruptures consume only a portion of the plate interface (Lay, 2015) to the center and the down-dip section creeps, the upper and lower portions of the plate should be treated respectively as elastic\n\nFigure 2. Horizontally and vertically stretched (so that the fault region and especially the plate interface zone is better represented) grids in SEGMENT, as delineated by black vertical lines and horizontal curves. The “locked” region gets the finest spatial resolution. In the corresponding calculational space, two shape factors (i.e., Jacobian operators) are involved in the governing equations. Background map showing subduction fault geometry is adapted from Wang (2015) .\n\nand visco-elastic material. Grids in the upper crust are considered elastic and elastic moduli are specified using canonical values. Grids in the lower crust and upper mantle away from the shear zone are considered to be visco-elastic (Wang et al., 2012) but getting more rigid upwards intending a smooth transition in the vertical domain. This is achieved by considering the thermal structure (Gao & Wang, 2014) in the parameterization of material viscosity\n\n$\\eta \\left(T,P\\right)={\\eta }_{0}D\\mathrm{exp}\\left[\\frac{Q+PV}{RT}\\right]$ . Here P is confining pressure, T is absolute\n\ntemperature, and D depends on the rock composition, grain size and fluid content. Here Arrhenius’ activation energy Q, specific volume V and universal gas constant R were assumed constants in the model. In response to different stress conditions, wall rocks can be compressed or stretched within a narrow range before onset of brittle fatigue. Arrays are allocated to record the amount of granular material formed at each grid cube. The shear zone is a granular zone of variable thickness (Figure 1(d)). In most cases, it is only a small friction of the shear zone layer that is granular material (also recorded in arrays). The parameterization of granular material viscosity follows Jop et al. (2006) , as implemented in SEGMENT (Ren et al., 2008) . The shear-thinning viscosity structure (Equation 8 in Ren et al., 2008 ) implies a shear-localizing effect from granular material accumulation. The present amount of granular material (Figure 1(d)) is deduced from surface velocity using the method in Ren et al. (2012) . Future variation of granular material is diagnosed from Equation (3) (described in next subsection on granular material generation; immediately follows). The granular ensemble size is deduced from its parent rock type and the associated grain sizes. In addition to the brittle interface of oceanic and continental crusts, GM also exist in the underbelly of the overlying continental crust just updip of the mantle wedge corner, a byproduct of (forearc) mantle wedge serpentinization (e.g., Hyndman et al., 2015; Audet & Burgmann, 2014 ). The GM material amount in this zone can be estimated from the clustering of episodic tremor and accompanying slow slip events and their magnitudes. Re-mapping of the grid stencils are automatically performed only if granular material accumulated exceeds 20 m or earthquake caused displacement over 20 m.\n\nSetup of the lower boundary stress conditions is critical for the simulation of stress buildup inside the simulation domain. The GPS measurements of plate motion speeds $V$ are used to fine tune the parameters of upper mantle layers so that lateral stress (basal drag) exerted on the lowest model layer at the active zones (mantle driven plate motion) and passive zones (plate driven mantle motion) nearly balance. From the vertical profile of the horizontal velocity components, it is clear that for most regions, the mantle motion is passively forced by the overlain plates and only in the active thermal hot spots (for Cascadia, the Yellowstone area), the boundary layer flow structure indicates that it is the mantle flow that exerts a strong drag to the overlain plate. Reflected in the flow structure, mantle flow speeds increase away from the lithosphere-asthenosphere boundary (LAB) rather than being a maximum at the LAB. Therefore, the driving stress comes primarily from lateral stress from neighboring grids within the lithosphere, rather than from the local basal drag from mantle (ultimately it is from the convective mantle but local to subduction zones it is not). The upper boundary condition is assumed to be stress-free (i.e., $\\tau \\cdot n=0$ at upper surface). The model spin-up from ad hoc initial conditions is aided by advanced data assimilation schemes using the remotely sensed plate motion, gravity (weather signal pre-filtered) signal and thermal flux measurements available. Accurate flow field is critical for an accurate estimation of the stress build-up between plates. Figure 3 shows present plate motion velocities at two vertical levels: 0 - 50 km and 100 - 250 km depth averaged. Compared with GPS measurements, globally, errors in meridional velocity is <0.02 mm/yr and in longitudinal direction is less than 0.03 mm/yr. For the interested Andes region, the error is slightly larger than global average (0.035 and 0.03 mm/yr). Over the Cascadia region, however, the errors are smaller than global average (0.027 and 0.015 mm/yr respectively). Based on the optimized solution, further experiments are performed on the sensitivity of groundwater perturbation and the dynamic evolution of the fault strength and the associated earthquake cycles. Although the governing thermos-dynamic equations of SEGMENT are very similar to those of the Advanced Solver for Problems in Earth’s ConvecTion (ASPECT, Kronbichler et al. (2012) ), it should be noted that the ASPECT model, due to its intended research purpose, has coarse resolution and cannot sufficiently represent surface topography. In fact, surface topography-induced creeping is a significant component for GPS sites in mountainous regions and on islands. This results in discrepancies in model top-level velocity and GPS measurements. Through skilled numerical setting, SEGMENT sidestepped this issue and significantly improved the velocity simulation of the near surface (<100 km depth) layers, where the stress build-up is critical for earthquakes (Figure 3(a)).\n\nFigure 3. Present day near surface (a) and deep (b) level plate motion velocities (blue arrows, in m/yr). GPS measurements (black and red (for regions of interest only) arrows) are from ftp://data-out.unavco.org/pub/products/velocity/pbo.final_nam08.vel (for North American Plate) and http://sideshow.jpl.nasa.gov/post/series.html (for global coverage). In Panel (b), resonance frequencies of global tectonic plates at the interfaces are labeled (years in red font). Hollow red arrows indicate the rupture tendency for the upcoming major earthquakes that are temporally in sequence. This also is the relocating direction of their epicenters.\n\n3.2.3. Groundwater Time Series\n\nFrom Equations (1) and (2), groundwater, through affecting the gravity field, contribute to fault instability. Its spatial fluctuation, through enhancing granular material generation, also weakens the fault from within (Section 3.1.2). To obtain the groundwater fluctuations over the past 15 years (2000-2015), a continuous time marching of the land surface scheme is performed with 30-min time step. Atmospheric parameters such as near surface air temperature, humidity, pressure, winds, and radiative components are from NCEP/NCAR reanalyses, and precipitation information from the Global Historical Climatology Network (GHCN, 1900-2015), GPCP, and The Tropical Rainfall Measuring Mission (TRMM, 1998-2015) are taken as input during the observational periods to drive the land surface scheme in SEGMENT to diagnose how much precipitation is percolated into ground water reservoir. For the recent decade (2003 onward), GRACE measurements provide a reliable means for the groundwater recharge/discharge. Model simulations are satisfactory when verified against GRACE measurements. GRACE is of very coarse resolution and model can provide spatial distribution of groundwater at much finer resolution. Obtaining groundwater distribution in a warmer climate is challenging. A small ensemble of climate models (27 climate models listed in Table 1) and an innovative approach in estimating extreme precipitation (Ren, 2014a) is used for the prediction of future occurrence of earthquakes. The partition into surface runoff, evapotranspiration and ground water percolation also is performed within SEGMENT.\n\nTable 1. Twenty seven GCM models used in this study to provide atmospheric forcing parameters to SEGMENT.\n\n3.2.4. Granular Material Generation and Transportation\n\nIn analogous to the state equation in Scholz (1998) , the following force-restore relationship is proposed here for granular material generation:\n\n$\\frac{\\partial a}{\\partial t}=-{c}_{1}a+{c}_{2}\\delta \\Delta E+ADV$ (3)\n\nwhere a is the thickness of the granularized rock layer, starting from 0, t is time, c1 is the e-fold time scale, and c2 is the Paris coefficient (Timoshenko & Gere, 1963) . Apparently, c1 signifies the negative feedback as granular material accumulates. The unit surficial energy density of rocks, J (<300 mJ/m2, Gilman (1960) ), is inverse proportional to c2. To form new surfaces at high confining pressure (~0.1 GPa), the energy need far surpasses the surficial energy. The c2 thus also is inversely proportional to confining pressure. Thermal effects also are included in c2. The coefficient $\\delta$ is 0 when the stress intensity change falls within the elastic range of the parent rock and is 1 when exceeding the elasticity range. $\\Delta E$ is the energy input rate when stress perturbation caused imbalance in principal stress exceeds the elastic range. External (e.g. groundwater fluctuation) tides with resonance frequencies are most effective in transferring distortion energy into the plates to cause fatigue and influence the granular material production rates.\n\n$\\Delta E={b}_{1}{\\text{e}}^{{b}_{0}|\\nu -{\\nu }_{p}|}{\\left(\\Delta K\\right)}^{4}$ (4)\n\nwhere the Paris coefficient ${b}_{1}$ depends on material property such as unit surficial energy and porosity of fault rocks. The coefficient ${b}_{0}$ is a negative number indicating the exponential damping of the groundwater tidal wearing when it is not synchronized with the natural frequency of the crust plate, $\\nu$ is the groundwater variation frequency, ${\\nu }_{p}$ is the resonance frequency of the crust plate, and $\\Delta K$ is the range of stress intensity change (proportional to the groundwater variations’ amplitude squared). From 2003 onward, GRACE measurements are available for improved SEGMENT estimates of the groundwater fluctuation amplitude. Plates at their interface assume complex configurations and determining the resonance frequency ${\\nu }_{p}$ is achieved by solving the deflecting equation, with realistic plate configurations and distributed density and temperature from local geology maps and recent observational studies (e.g., the Andes Survey 2006). The estimated inherit resonant frequencies are labeled in Figure 4. The framework of Equations (3) and (4) also is applicable to the silica granules “floating” above the mantle wedge corner. In that circumstances, the $\\Delta E$ is parameterized as thermal control: a thermal switch is designed as a function of the temperature gradients around the mantle wedge and the plate dipped in. In that manner, GM generation is shut-off for cold-slab subduction zones such as the Japan Trench and Hikurangi; whereas being active around warm-slab subduction zones such as the Mexico and Nankai subduction zones.\n\nFigure 4. SEGMENT diagnosed time scales of plate resonance frequency to groundwater fluctuation forcing (numbers in red). Background map was obtained from sideshow.jpl.nasa.gov/mbh/series.html and has been cleared for public release.\n\nThe inter-plate granular material chute system, as left behind by seamounts, removes granular deposits through advection process (ADV term in Equation (3)) exactly the same way as the surface runoff moves debris (Dietrich & Perron 2006) . Transportation of granular material is left to the Navier-Stokes solver of SEGMENT. Although granular materials are ubiquitous between contacting plates, under the huge confining pressure, the mean free path, in analogous to Prandtl mixing length in fluid dynamics, of the granular particles are very limited for most places lacking the parallel granular stripes left behind by seamounts or strong-contact “asperities” (Figure 1(b)). In the absence of macroscopic tunnels between plates, granular material redistribution resembles the diffusion of heat: moving along the potential gradient (included in coefficient c1). Earthquakes, primarily through p-wave shaking, increase the mean free path of the granular particle and make it more like fluids, and facilitate its downstream dispatch. Advection dominates the co-seismic down-dip transport and diffusion processes works steadily during the inter-seismic period. Earthquakes also are the mechanism shedding off the “saw dust” and facilitating generation of new granular material. Granular material’s formation is a fatigue/damage process (Bercovici & Ricard, 2014) . As cyclic variations in forcing favors fatigue (first two terms on the RHS of Equation (3)), especially when the forcing frequency is in resonance with the segment of the plate. In this sense, groundwater fluctuation, which possesses frequency on annual to multi-decadal scales, likely in concord with the fatigue of the inter-plate strong contacts. Thus, large spatial scale droughts and floods contribute to the irregularity of earthquake cycle.\n\nAlthough in principle healing effects can be included into c1, we did not involve the healing mechanism in Equation (3) because it is only applied in the brittle crust zone near subduction zones, where temperature are lower due to the double insulation from oceanic and continental plates. The best way to heal a fatigue, the phase changes (e.g., melt) are not relevant here. The released elastic energy during an earthquake is primarily dissipated as heat at the interface. The remainder propagates away as P and S-wave energy. Earthquake involves long distance displacements may cause melt and partially heal the fatigues interface (Rowe et al., 2012) . For the inter-seismic periods, Equation (3) is safely applicable. Clarifying these points about granular material also helps as numerical modeling of faults are now feasible with super-computers and more detailed observations (e.g., INSAR and GPS measurements of displacements; GRACE measurements of mass changes and numerous ways of thermal heat fluxes).\n\nTransportation of granular material is left to the Navier-Stokes solver of SEGMENT. The motion of the granular material inside the chute system at the plate interface is primarily low Reynolds number (Re) laminar flow. The gravity potential associated with the self-weight of granular material is negligible compared with its dominance at the earth surface flow systems. Conversely, confining pressure, the granular viscosity, and the interaction of granular with the channel boundary become important. These forces cause a passive pumping of granular material in the inter-plate chute systems. The thermal process also is simplified by the laminar flow because convective mixing in regular free fluids are non-existent and only diffusion kinetics are dominant. In many aspects, it is similar to micro-fluids in the capillary pipes. Confining pressure becomes the driver in guiding the flow direction.\n\nAs in Figure 1(c), the strong contact regions have the most active granular material generation. Its existence in the locking zone assists the unlocking because the shear resistance granular material can provide at the strain rate of the locking stage is equivalent to a ${\\mu }^{\\prime }$ value of 10−3, four orders of magnitudes smaller than solid-solid contact, representing a lubricant between the two elastic plates. The bridging effects by themselves may not cause fault weakening. But with the associated granular material generation, they definitely weaken faults through forming a negative spatial covariance between loading and friction coefficients. During the past two decades, it is being gradually realized that the maximum apparent frictional coefficient decreases with time (Scholz, 1998) . Granular material formation is one feasible explanation for slide weakening. The generation and transport of granular material at the plate contact can reconcile many counter-intuitive characteristics of major earthquakes ( Wang, 2015 and references therein) and also is critical in organizing small-scale ruptures to generate major earthquakes. Proper treatment of present granular material (Figure 1(d)) between plates also may provide a time frame for the next major earthquake on the same fault. From Figure 1(d), it seems, globally, the granular material thickness distribution is positively correlated with major earthquakes’ occurrence frequency. Following sections are discussion of earthquakes over two selected regions, as actual applications of the numerical models.\n\n4. Results\n\nThere are three interlaced mechanisms working together to make the earthquake prediction tricky for existing rubrics. The mechanisms we identified after sedulous essays using the singular SEGMENT model and remote sensing data are whetted against 73 historical cases, all within the reanalyses period with quality precipitation data. The reproduced earthquakes are viable, especially for cases after 2003, the starting point of the GRACE measurements, and before 2015, the aging of the current GRACE satellites. Basic model background verifications are available in Section 3, including simulated surface velocities against GPS measurements. Following sub-sections summarize model verification and simulations over three representative regions where groundwater fluctuations play significant role in affecting occurrence of major earthquakes. Although this study attempts a projection of future major earthquakes, the results cannot be regarded too literally. This is because major earthquakes are rare events and the existing information suitable for training the dynamical model still is scarce. Consequently, many ad-hoc assumptions are still involved in the parameterizations and interpretation of the input data, making the predictions of spatial and temporal of the future occurrences only of reference values.\n\n4.1. Model Verification\n\nBy simulating the time scale of the evolution of strain and stress in the fault zone, using SEGMENT, with appropriate parameter setting, assisted by remote sensing information, the timing and location of major earthquakes (Mw > 5 globally) can be estimated, with uncertainties in timing reduced down to a year and location error within 120 km (root mean squared error for historical records during 1979-2015). The primary limitation on the prediction accuracy resides with the resolution of CRUST-1.0 model, depending on which SEGMENT resolves fault geometry and composing material properties. As with all prediction system, the uncertainty grows exponentially with forecasting time. In the following discussion, the uncertainty level will be presented as necessary.\n\nAlthough global results are obtained, the following discussion focuses on three regions: The Tibetan Plateau and surrounding regions (TP), the Cascadia, and the Andes subduction faults. All three regions are adequately covered by GPS measurements and have large enough land areas that can be sensed by GRACE for the mass fluctuations around the faults. Primarily from different fault geometries, plate opposing motion speeds, and the distribution patterns of inter-plate granular material, earthquakes over the selected three regions are of various morphologies. Major earthquakes over the Andes and the TP refer respectively to those with magnitude > Mw7.5 and >Mw5.5. Without explicit statements, >Mw5 over other world regions is regarded as major earthquakes.\n\n4.2. Earthquake Activity in the TP Region: Correlations with Groundwater Fluctuations\n\nLarge scale (e.g., thousand kilometers) groundwater fluctuations exert extra stress on fault interface. Almost all major (>Mw5.5) earthquakes during 2003-2016 over the Tibetan Plateau occurred during low groundwater phases (figure not shown). Although the fluctuation of groundwater is secondary to plate motion in causing stress build up, it is an “extra”, and usually “the last straw that breaks the camel”. Analyzing the stress field clearly indicates that once two plates are coupled together, the underlain plate drags the lighter overriding plate downward and they go in tandem deeper into the Earth, pushing away the higher density upper mantle material. This causes mass loss around the fault region. This process, although only causing mass loss at a rate two orders of magnitude smaller than groundwater discharge, is however aided when the region is experiencing mass loss due to groundwater reduction (e.g., when evaporation steadily exceeds precipitation during extended droughts). The recent Nepal earthquakes of April and early May, 2015, and a third earthquake in Burma a year later all resides inside the mass loss region. The temporal sequence of the three major earthquakes and the ruptures’ spatial pattern (starting from northwest and extending to southeast) can be explained by the granular material production and transportation condition as proposed here. They represent key stages of strain energy release in the lateral direction, along the transverse direction of the fault by unlocking the granular sticky spots scattered over the plate interface. The order of unlocking strictly followed the granular material accumulation on the shear zone.\n\nThe three epicenter-consisting patches (consisting the three respective epicenters) were all about to approach saturation by April 2015, after ~72-year’s stress accumulation (Bollinger et al., 2014; Men & Zhao, 2016) . The reason the west-most one ruptured first is because the aiding of groundwater reduction. Based on SEGMENT simulation, the groundwater recharge during the past 80 years has a decreasing trend over the Bay of Bengal region (74-104˚E; 15-35˚N as the region of interest), agree with the monsoonal precipitation trends as identified in Wang and Ding (2006) . On average, the groundwater reduction rate is ~31 Gt/yr. The super-imposed stress field produces ~85 km2 saturated area (as defined in Equation (2)) annually. The matured area is not spatially contiguous on the fault interface. Rather they scatter sporadically over the inter-seismically locked interface. Connection of these sporadically scattered saturated patches is critical for generating earthquakes. For example, the occurrence of the first earthquake (April 25, 2015; M7.8; epicenter at 28.147˚N, 84.708˚E) redistributed the granular material due underneath and re-assumes a stronger rock-rock contact. This lessens the degree of saturation of the second major earthquake (occurred on May 12, 2015 of M7.3 magnitude and epicenter located 200 km to its southeast at 27.973˚N, 85.963˚E) region and it took another month for its occurrence. The earthquakes over Burma (Mw6.9 on April 12, 2016 at 23.13˚N, 94.9˚E and Mw6.8 on August 25, 2016 at 20.9˚N, 94.6˚E), further to the southeast, were not directly related to these two events. They were a result of the lightening of the Bay of Bengal region, to be further addressed later. Unlike some recent studies (e.g., Men & Zhao, 2016 ) that predicted the future earthquake (after the Nepal earthquake 2015) would be to the north-west sector, SEGMENT simulation indicates that the sector to the south-east also is becoming unstable. Actually the maturation pattern is a jumping between the eastern sector and the western sector, with cluster centers (30˚N, 84.4˚E) and (27.5˚N; 86.2˚E). Consequently, major earthquakes also occur alternatively (between these two regions in a bi-polar mode).\n\nThe Longmenshan fault zone (LFZ, 102-106˚E; 30-33˚N) is a vivid incarnation of the “non-unique solution” configuration as presented in Section 2. The LFZ defines the eastern margin of the Tibetan Plateau. Very recent synthetic approach (gravity survey, seismic reflection profile and earthquake focal mechanism data, and sediment analysis; personal communication with Z-X. Li, 2016) indicated that the LFZ is roughly composed of two segments of variant formations. The central-northern section, formed ~40 Myr ago (Jiang & Li, 2014; Richardson et al., 2008) , is a lithospheric through-cut fault zone that possesses low elastic strength but with strong dextral strike-slip motions. In contrast, the southern sector is a crustal thrust zone dominated by shallow-angle thrust motion of the TP over the moderately stiff South China Plate. This Mobius-twisted fault plane and contrasting behaviors along the LFZ, in addition to providing kinematically favorable conditions for instability, have also profound influence on groundwater reservoir. The Mw7.8 Wenchuan earthquake further lent evidence of the importance of gravity fields in fostering major earthquakes.\n\nExamining the mass change fields averaged between 2003 and 2008, it is clear that Wenchuan resides at the center of a saddle region (Figure 3 in Ren et al. (2015) ). For earthquakes fostered along the same fault (but temporally consecutive), the tectonic plate motion caused compressing usually invokes very similar saturation patterns. The primary reason for the randomness of epicenters is due to more transient perturbations such as groundwater fluctuation. The super imposed field cause a specific point to reach saturation first and propagate away to cause a large rupture area. The rupture can grow because, to seize the motion of the unstable plates, neighboring unsaturated regions may be activated. If the propagation encountered too many unsaturated regions, the momentum is quickly lost and this strongly limits the magnitude of the earthquake. On the other hand, if the propagation starts from a region surrounded by a ripen neighborhood, a positive feedback forms and resulted into a large magnitude earthquake. By initiating the rupture near Wenchuan, a more efficient energy release is achieved. From SEGMENT simulation with future precipitation scenarios under A1B scenario provided by a small-volume climate model ensemble (as listed in Table 1), the next major earthquake will be within Ganshu province (epicenter located along 104.8˚E longitudinal arc between 33 and 35˚N) to the north east, ~120 ± 45 years later of magnitude ~Mw7. For the far-larger TP region, the India-Eurasin collisional system, while thickening the elastic lithosphere, created numerous vertically extended crevasses. These crevasses are very effective in channeling groundwater into depth. Combined with TP’s orographic effects, precipitation variation transfers readily into extra lateral stress on the faults. Therefore, groundwater fluctuation becomes a dynamic factor affecting the epicenter location and the magnitude of major earthquakes. Because the persistent gravity reduction in the Bay of Bengal area and Southwest China, coupled with insignificant changes over the plateau and the northern Tarim and very arid Qaidam basins, a saddle-shape super-imposed stress field is formed and maintained over the past half century. The natural, geological background-deduced recurrence frequency (“dry” frequency) is greatly reduced over the region, especially over the Southeast Asia peninsula. The fault zone over Burma likely becomes unstable in about two years. Very interestingly, the epicenters of the upcoming earthquakes, affected by the weakening of monsoonal precipitation, are located along 96 ± 1˚E parallel between 17-28˚N (most frequently 17-23˚N sector), roughly parallel the Ayeyarwady river basin.\n\nFactors controlling earthquake irregularity over Cascadia and Andean share similarity with the TP region but have their respective specialties. Here provides a brief summary before detailed discussion. The granular material filled tunnels left behind by seamounts located on top of the Nazca and Juan Fernandez ridges foster many major Andean earthquakes. Because of the proximity of ENSO, the groundwater fluctuation over the Peru and Chilean coasts bear apparent 4 - 7 year fluctuations. However, unlike the collisional system of TP, there lack through-cut faults to channel precipitation into deeper depth (closer to the locked fault zone). From Section 3.1.1, the deeper the groundwater reservoir resides, the more efficient its mechanical effects. As a result, groundwater fluctuations have to exert effects mechanical-indirectly through aiding granular material generation and distribution. Consequently, earthquakes occur after a significant hiatus. Andes earthquakes are co-controlled by groundwater fluctuation and existing granular material distribution patterns.\n\nCascadia is similar to Andean subduction faults such as Nazca and Juan Fernandez in that they are weak inter-plate coupling and the natural occurrence frequency of earthquakes are rather low. Also, there lack secondary through-cut faults on the continental plates. Cascadia fault is unique in that local precipitation, although being of highest rate in North America, lacks annual to decadal variabilities. That explains the ~600 year natural major seismic cycle. To its south, along the pacific coast cordillera (ranges), lies the Mediterranean climate California, which precipitation is affected by teleconnection patterns such as ENSO and PDO and turned out to be sensitive to climate warming (Ren et al., 2011) . The Cascadia fault is thus more sensitive to Californian precipitation than local precipitation.\n\n4.3. Andes Earthquakes: Co-Controlled by Granular Material Pattern and Groundwater Fluctuation\n\nFigure 5 illustrates the present plates interface geometry at coastal Andes, as CRUST1.0 provided to SEGMENT. Panel (b) shows the ongoing base state creeping structure. The black arrows are horizontal flow speeds. The convergence in this direction is apparent but the total convergence is not apparent because the horizontal velocities rotate counterclockwise in the horizontal plane and the magnitudes are not changing so drastically. The coupling is weak as to the compressive stress between continental and oceanic plates. Especially, the ocean-continental subduction here does not lead to thickening of the continental plate. Baby et al. (1997) suggested that the subduction of oceanic lithosphere coupled with under-plating and a brief episode of gravity spreading contributed to crustal thickening in the back arc of the Central Andes. There still lacks a uniform theory that explains how very thick crusts develop (e.g., anomalies in the map of Christensen and Mooney (1995) ).\n\n4.3.1. Footprints of Seamounts-Role of Granular Material in Andes Earthquakes Spatial Pattern\n\nGranular accumulation along the Peru coast (South of Lima) extends deep inland (Figure 1(d)). The pattern reflects the way the Nazca Ridge passed through the continental plate. Because the thrust is at an acute angle to the continental plate motion, the channels on the overriding plate left by the seamounts is oriented at an angle to the Nazca Ridge’s moving direction (as shown on Figure 6).\n\nFigure 5. Present plates interface geometry at coastal Andes (represented in the SEGMENT model using CRUST 1.0 data). Panel (a) is crust thickness distribution. Panel (b) is a vertical cross-section along the red line in panel (a). The black arrows are horizontal flow speeds. The convergence in this direction is apparent but the total convergence is not apparent because the horizontal velocities rotate counterclockwise and the magnitudes are not changing so drastically. Panel (c) is the density structure in the same vertical cross-section (as in (b)). Over this fault, the motion is driven by mantle flow from underneath. The coupling is weak as to the compressive stress between continental and oceanic plates. Especially, we would like to emphasize that ocean-continental subduction here does not lead to thickening of the continental plate. Geometry data obtained from CRUST1.0 (Laske et al., 2013) .\n\nFigure 6. Granular material (shades are thickness in meter) around the subducting Nazca Ridge under the advancing South American plate. Currently locked regions (A, B to the left) with thick granular material accumulation are connected through macroscopic tunnels with seamounts (A, B and C on the right, over the Nazca Ridge). These seamounts grind the overriding continental plate and created the slanted tunnels. These tunnels collapsed and had earth surface footprints. Bold blue arrow is the moving direction of the flat oceanic slab with sea mounts A-C sitting on. The Hollow red arrow indicates the direction of the “tunnels” on the overriding continental plate plowed by seamounts. Note that the Mw8.1 Chilean 2014 event occurred exactly on the granular belt left by the “handle” of the Juan Fernandez Ridge.\n\nThis is just like chiseling a rotating disk. The granular material leftover also is distributed in stripes oriented the same way. In fact, all stripes originated from the sea mounts extends to the left flanks, due to the relative motion of the underlying and overriding plates. The granular material to the west (A’) ages as old as 14 Ma, whereas granular material at A (on the current flat lab) is newly generated. As the channels get older, they collapse and get wider but shallower, following the mechanisms as described in subsections 1.1 and 2.4 of the SM. These granular stripes become the soft belly of the fault and are the locales of many major earthquakes. Juan Fernandez to the south is very similar to Nazca ridge at present but are very different 3 Ma ago (Yanez et al., 2001; Yuan et al., 2002) . This chain of seamounts plowed beneath Puna at ~10-6 Ma ago like a hockey stick. At present, the “head” of the stick had melt into the upper mantle and only the “handle” left. However, the granular material grinded off by the “head and shaft” of the stick still acts as locales of earthquakes, especially at the seismic section of the subducting fault at the joint of Peru and Chile (at 20S; 70W). Also, the channels and the granular stripes left behind by the seamounts on the Juan Fernandez Ridge are more south-north oriented than those left behind by the Nazca Ridge. The epicenters of major earthquakes after 1800 correspond very well with the granular stripes. The granular stripes left by Juan Fernandez Ridge foster larger (but less frequent) earthquakes. For both seamount ridges, their left (northern) flanks foster larger earthquakes, because the granular accumulation patterns.\n\n4.3.2. Groundwater’s Indirect Control (With Apparent Hiatus) of Andes Earthquakes’ Spatio-Temporal Patterns\n\nFor the Peru and Chilean coast, El Nino and the opposite phase La Nina ocean-atmospheric teleconnection patterns (ENSO) play a critical role in determining the phase of groundwater. Earthquakes, especially those greater than Mw 5.5 but less than Mw7, do not necessarily in phase with the ENSO. This means that, unlike for TP region, the direct mechanical effect, such as indicated by Equation (1), may not be the dominant factor. However, the occurrence frequency of the major earthquakes is at ~4 - 7 years, same as ENSO. Model simulation confirmed that it is the fluctuation, rather than the actual weight of groundwater that is critical for the granular material generation and ensuing earthquakes. In other words, for this region, groundwater caused loading fluctuation affects earthquakes through granular material generation and evolution, rather than through the direct mechanical mechanism (Equation (2)).\n\nEven there are apparent temporal hiatus for most earthquakes between MW5.5 and MW7, all major earthquakes greater than Mw7.5 occurred at dry stages of the groundwater fluctuation cycles. For example, the 1998, 2003 and 2010 earthquakes all occurred at the nadir of the regional gravity field. In a certain sense, ENSO lessened the severity of the earthquakes over the Andes. Without ENSO, all the scattered < Mw7.5 earthquakes will occur as less frequent but more severe 1960-like super earthquakes. Through affecting precipitation partition into evaporation, runoff and infiltration into groundwater reservoir, topographic features have strong correlation of seismogenic zone segmentation (Bejar-Pizarro et al., 2013) . However, it simply is an illusion that surface topographic feature corresponds to ruptures at depth. Instead, the precipitation and the ensuing groundwater distribution contribute to earthquake irregularity. This is primarily because the input of strain energy from the periodic recharge and discharge of groundwater in a large regional area increases the prospect of locking formation, through granular fatigue. Monitoring the fluctuation tides of groundwater thus is a practical way for understanding earthquake cycle. Gravity mapping satellites such as GRACE provide essential assistance.\n\n4.4. Cascadia Subduction Zone’s Stability Affected by Californian Droughts\n\nDetailed discussion of Cascadia subduction fault, such as effective strength ( ${\\mu }^{\\prime }$ ), geometrical characteristics and unique geological conditions can be found in Wang and He (1999) . Here we would like to discuss the extra stress field super-imposed by precipitation fluctuations. Californian droughts (2011-2015), even though hundreds kilometers apart, significantly enhanced the instability of the Cascadia subduction zone. Due to the lack of vertically extended crevasses such as resulted from the India-Eurasia collisional system, groundwater fluctuations take effect through aiding granular material formation. Therefore there is a hiatus between droughts and the seismogenic effects. Rather than a direct temporal correspondence, there is a multiple year hiatus before the seismogenic effects from droughts to set on. For example, the extended Californian drought may contribute to fault instability 2 (southern side Oregon-Washington coast) to 10 years (northern BC coast) later. In addition, the Cascadia fault’s locking is special in that the oceanic plate still is creeping, which is clear from examining the velocity profile. It is locked in a sense that the neighboring regions have large gradients in motion speeds, signifying stress build-up. For example, the Olympic National Park coast of the Cascadia subduction zone is now inter-seismically locked (Figure 5). At the contact, the speed of the continental plate is close to zero. The oceanic plate, however, still is in motion. The continental plate is accumulating elastic potential energy at an annual rate of 7.6 × 1013 J/yr. The estimated apparent frictional coefficient is 0.12. If there is no perturbation on the loading field, the natural frequency of seismic cycle is ~600 year. By releasing elastic energy accumulated, an earthquake of Mw7.8 will be produced (of energy of ~45.7 PJ). However, from SEGMENT simulations, it is apparent that even the Californian drought exerted a cyclic stress on the interface and enhances the fatigue of the interface. If the same hydrological cycle over the past 50 years are representative for the past 800 years, the recurrence frequency is reduced to ~446 ± 70 years. As a result, a >Mw7.5 (~34 PJ) earthquake is expected within the upcoming decade. From the granular material distribution pattern, the rupture likely starts from the southern sector, or between the Oregon coast and the BC, Canada coast.\n\nThe logic of relating California droughts to future Cascadia earthquakes seems counter-intuitive since Cascadia has a lot of its own hydrological processes (it enjoys some of the highest precipitation rates in North America). The reason the fault is more sensitive to Californian precipitation is because of the fluctuation in precipitation, not the absolute amount. The high latitude precipitation over Cascadia region of Canada does not have large inter-annual to inter-decadal variation as in the Californian region, which is dually influenced by ENSO and PDO. More importantly, the Mediterranean climate is sensitive to climate warming (Ren et al., 2011) . Local Canadian precipitation fluctuation would be more direct but the magnitude of fluctuations truly is smaller than that from Californian precipitation. The North American plate cannot keep full rigidity over such a distance. The waveguide for propagating the bridging effect zig-zags ~5.5 wave lengths roughly along the pacific coast cordillera (ranges).\n\n5. Discussion\n\nThe quest to understand Earthquakes is an ancient and universal question. The new earthquake theory and its precepts presented here are a convergence of several factors: recent advances in remote sensing technology; recent, tectonically active stage motivated many researchers to re-evaluate previous assumptions (Wang, 2015) , sometimes to a radical degree, and most critically, a well-vetted landslide model as prototype of an advanced numerical modeling system to handle the mechanics of inter-plate earthquakes and to verify the original hypotheses proposed here.\n\nEarthquake occurs when the plate-coupling stress determined repose angle reaches the sum of two angles representing respectively fault geometry and fault strength. Accurate representation of the evolution of fault strength and repose angle thus is critical for earthquake simulation and prediction. The earthquake triad is a generic framework that applies unanimously. Groundwater fluctuations, although being small, play a pivotal role in connecting the three components together in regulating earthquake irregularity. Groundwater may influence fault instability mechanical-directly by super-imposing a seismogenic lateral stress field that works in synergy with the plate coupling stress; and mechanical-indirectly, or weakens the fault from within, achieved by enhancing fault fatigue through granular material generation. Mechanical-directly alone, groundwater fluctuation superimposed stress field is on the same order as the residual of gravity-aided frictional stress and plate coupling stress thus is non-negligible in affecting the timing of major earthquakes.\n\nClosely centered on the earthquake triad, a minimum requirement for a working earthquake prediction system for realistic 3D fault configuration, the Groundwater aided Granular material Generation framework is proposed here and implemented in the well-vetted SEGMENT numerical modeling system. By exploring existing records of earthquakes using SEGMENT, it is found that for collisional systems, mechanical-direct effects from groundwater control the irregularity of major earthquakes. This is because the orographically spatially highly biased precipitation is effectively channeled into deeper depth by the prevalence of through-cut faults. Droughts elsewhere also are seismogenic but likely take effects mechanical-indirectly. For example, ENSO, as the dominant player for regional precipitation, has strong influence on the gravity field over the Andes region. The occurrence of major earthquakes, although bearing the same 4 - 7 years occurrence frequency of ENSO, has a significant hiatus when compared with the fluctuation in the gravity field. Similarly, the stability of the Cascadia fault is remotely affected by Californian droughts. The input of strain energy from the periodic recharge and discharge of groundwater in a large regional area increases the prospect of un-locking of seismically coupled plates. In a certain sense, climate fluctuations, through extended droughts determining groundwater loading pattern, affect the earthquake cycles.\n\nThis study is a complement to the prevailing Coulomb fracture criterion and Anderson’s theory of faulting (Jaeger, 1963; Fossen, 2010) . The earthquake triad concept reconciles and rectifies some existing conceptual errors in the field. All previous earthquake models considered one or at most two aspects of the triad hence lacked predictive capability. Solution non-uniqueness essentially roots in information paucity. Remotely sensed data found their use in deducing interpolate granular material distribution as well as verifying model simulated precipitation drainage into groundwater reservoir, greatly improved model performance. Although we still cannot make punctilious predictions in timing and location of epicenter, the performance of the model is trenchant enough and deserves further improvements, especially as we are entering a big data era.\n\nAcknowledgements\n\nWe are thankful to Drs Z-X Li and K. Wang for discussion on earthquake mechanisms and various modern approaches for monitoring stress build up. Dr Z-X Li also is appreciated for providing the Andes Geological Survey details. Drs Lance M Leslie, M. Lynch and J Bornman proof-read the first draft of this manuscript. The data used are all publicly available.\n\nThe author declares no competing financial interests.\n\nAppendix 1: Velocity Transformation between SEGMENT’s Sphere Crown Model and Cartesian System of ASPECT\n\nFor generic curvilinear system, we have the following identities.\n\n$\\text{d}\\Re ={H}_{i}{e}_{i}\\text{d}{q}_{i}$ (A1)\n\n$V=\\text{d}\\Re /\\text{d}t={H}_{i}{e}_{i}\\text{d}{q}_{i}/\\text{d}t={V}_{qi}{e}_{i}$ (A2)\n\n$V=\\text{d}\\Re /\\text{d}t=u{i}_{0}+v{j}_{0}+w{k}_{0}$ (A3)\n\nUsing (A2),\n\n$\\left\\{\\begin{array}{l}u=\\text{d}x/\\text{d}t=\\frac{\\partial x}{\\partial {q}_{1}}\\text{d}{q}_{1}/\\text{d}t+\\frac{\\partial x}{\\partial {q}_{2}}\\text{d}{q}_{2}/\\text{d}t+\\frac{\\partial x}{\\partial {q}_{3}}\\text{d}{q}_{3}/\\text{d}t=\\frac{\\partial x}{\\partial {q}_{i}}\\frac{{V}_{qi}}{{H}_{i}}\\\\ v=\\text{d}y/\\text{d}t=\\frac{\\partial y}{\\partial {q}_{1}}\\text{d}{q}_{1}/\\text{d}t+\\frac{\\partial y}{\\partial {q}_{2}}\\text{d}{q}_{2}/\\text{d}t+\\frac{\\partial y}{\\partial {q}_{3}}\\text{d}{q}_{3}/\\text{d}t=\\frac{\\partial y}{\\partial {q}_{i}}\\frac{{V}_{qi}}{{H}_{i}}\\\\ w=\\text{d}z/\\text{d}t=\\frac{\\partial z}{\\partial {q}_{1}}\\text{d}{q}_{1}/\\text{d}t+\\frac{\\partial z}{\\partial {q}_{2}}\\text{d}{q}_{2}/\\text{d}t+\\frac{\\partial z}{\\partial {q}_{3}}\\text{d}{q}_{3}/\\text{d}t=\\frac{\\partial z}{\\partial {q}_{i}}\\frac{{V}_{qi}}{{H}_{i}}\\end{array}$ (A4)\n\nApplying to a spherical coordinate system (Figure A1) with the forward transformation equation of\n\nFigure A1. Geometry representation of the spherical coordinate system as in SEGMENT. Panels (a)-(c) show respectively the projections of unit directional vector ${i}_{0},{j}_{0}$ and ${k}_{0}$ into the spherical coordinates ${\\theta }_{0},{\\lambda }_{\\text{0}}$ and ${r}_{0}$ . The geometrical relationships in (d) is repeatedly used in deriving the cosine relationships of angles composed by the intersection line and two vectors residing respectively in the two mutually perpendicular planes. The projection relationship between the unit cosine directions are summarized in (a)-(c).\n\n$\\left\\{\\begin{array}{l}x=r\\mathrm{cos}\\lambda \\mathrm{cos}\\theta \\\\ y=r\\mathrm{cos}\\lambda \\mathrm{sin}\\theta \\\\ z=r\\mathrm{sin}\\lambda \\end{array}$ (A5)\n\nwhere in geoscience convention, $\\theta$ is longitude, $\\lambda$ is latitude and r is the distance from the earth center (Figure A1). Still in right-handed coordinate system, lets define ${q}_{1}=\\theta ,{q}_{2}=\\lambda ,{q}_{3}=r$ . The corresponding Lame operators:\n\n$\\left\\{\\begin{array}{l}{H}_{\\theta }=\\sqrt{{\\left({{x}^{\\prime }}_{\\theta }\\right)}^{2}+{\\left({{y}^{\\prime }}_{\\theta }\\right)}^{2}+{\\left({{z}^{\\prime }}_{\\theta }\\right)}^{2}}=r\\mathrm{cos}\\lambda \\\\ {H}_{\\lambda }=\\sqrt{{\\left({{x}^{\\prime }}_{\\lambda }\\right)}^{2}+{\\left({{y}^{\\prime }}_{\\lambda }\\right)}^{2}+{\\left({{z}^{\\prime }}_{\\lambda }\\right)}^{2}}=r\\\\ {H}_{r}=\\sqrt{{\\left({{x}^{\\prime }}_{r}\\right)}^{2}+{\\left({{y}^{\\prime }}_{r}\\right)}^{2}+{\\left({{z}^{\\prime }}_{r}\\right)}^{2}}=1\\end{array}$ (A6)\n\nFrom Equation (A2),\n\n$\\left\\{\\begin{array}{l}{V}_{\\theta }=r\\mathrm{cos}\\lambda \\frac{\\text{d}\\theta }{\\text{d}t}\\\\ {V}_{\\lambda }=r\\frac{\\text{d}\\lambda }{\\text{d}t}\\\\ {V}_{r}=\\frac{\\text{d}r}{\\text{d}t}\\end{array}$ (A7)\n\nFrom Equation (A4), we get\n\n$\\left\\{\\begin{array}{l}u=\\frac{\\partial x}{\\partial {q}_{i}}\\frac{{V}_{qi}}{{H}_{i}}=\\frac{\\partial x}{\\partial \\theta }\\frac{{V}_{\\theta }}{{H}_{\\theta }}+\\frac{\\partial x}{\\partial \\lambda }\\frac{{V}_{\\lambda }}{{H}_{\\lambda }}+\\frac{\\partial x}{\\partial r}\\frac{{V}_{r}}{{H}_{r}}\\\\ \\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{ }=\\left(-r\\mathrm{cos}\\lambda \\mathrm{sin}\\theta \\right)\\frac{{V}_{\\theta }}{r\\mathrm{cos}\\lambda }+\\left(-r\\mathrm{sin}\\lambda \\mathrm{cos}\\theta \\right)\\frac{{V}_{\\lambda }}{r}+\\left(\\mathrm{cos}\\lambda \\mathrm{cos}\\theta \\right)\\frac{{V}_{r}}{1}\\\\ \\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{ }=-\\mathrm{sin}\\theta {V}_{\\theta }-\\mathrm{sin}\\lambda \\mathrm{cos}\\theta {V}_{\\lambda }+\\mathrm{cos}\\lambda \\mathrm{cos}\\theta {V}_{r}\\\\ v=\\frac{\\partial y}{\\partial {q}_{i}}\\frac{{V}_{qi}}{{H}_{i}}=\\frac{\\partial y}{\\partial \\theta }\\frac{{V}_{\\theta }}{{H}_{\\theta }}+\\frac{\\partial y}{\\partial \\lambda }\\frac{{V}_{\\lambda }}{{H}_{\\lambda }}+\\frac{\\partial y}{\\partial r}\\frac{{V}_{r}}{{H}_{r}}\\\\ \\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{ }=\\left(r\\mathrm{cos}\\lambda \\mathrm{cos}\\theta \\right)\\frac{{V}_{\\theta }}{r\\mathrm{cos}\\lambda }+\\left(-r\\mathrm{sin}\\lambda \\mathrm{sin}\\theta \\right)\\frac{{V}_{\\lambda }}{r}+\\left(\\mathrm{cos}\\lambda \\mathrm{sin}\\theta \\right)\\frac{{V}_{r}}{1}\\\\ \\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{ }=\\mathrm{cos}\\theta {V}_{\\theta }-\\mathrm{sin}\\lambda \\mathrm{sin}\\theta {V}_{\\lambda }+\\mathrm{cos}\\lambda \\mathrm{sin}\\theta {V}_{r}\\\\ w=\\frac{\\partial z}{\\partial {q}_{i}}\\frac{{V}_{qi}}{{H}_{i}}=\\frac{\\partial z}{\\partial \\theta }\\frac{{V}_{\\theta }}{{H}_{\\theta }}+\\frac{\\partial z}{\\partial \\lambda }\\frac{{V}_{\\lambda }}{{H}_{\\lambda }}+\\frac{\\partial z}{\\partial r}\\frac{{V}_{r}}{{H}_{r}}\\\\ \\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{ }=0\\frac{{V}_{\\theta }}{r\\mathrm{cos}\\lambda }+\\left(r\\mathrm{cos}\\lambda \\right)\\frac{{V}_{\\lambda }}{r}+\\left(\\mathrm{sin}\\lambda \\right)\\frac{{V}_{r}}{1}\\\\ \\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{ }=\\mathrm{cos}\\lambda {V}_{\\lambda }+\\mathrm{sin}\\lambda {V}_{r}\\end{array}$ (A8)\n\nEquation (A8) is forward transformation, or using velocity in spherical coordinates (SEGMENT) to express velocity components in Cartesian system.\n\nDefine matrix $\\begin{array}{c}{I}_{1}=\\left(\\begin{array}{c}{i}_{0}\\\\ {j}_{0}\\\\ {k}_{0}\\end{array}\\right)\\left({\\theta }_{0},{\\lambda }_{\\text{0}},{r}_{0}\\right)=\\left(\\begin{array}{ccc}\\mathrm{cos}\\left({i}_{0},{\\theta }_{0}\\right)& \\mathrm{cos}\\left({i}_{0},{\\lambda }_{\\text{0}}\\right)& \\mathrm{cos}\\left({i}_{0},{r}_{0}\\right)\\\\ \\mathrm{cos}\\left({j}_{0},{\\theta }_{0}\\right)& \\mathrm{cos}\\left({j}_{0},{\\lambda }_{\\text{0}}\\right)& \\mathrm{cos}\\left({j}_{0},{r}_{0}\\right)\\\\ \\mathrm{cos}\\left({k}_{0},{\\theta }_{0}\\right)& \\mathrm{cos}\\left({k}_{0},{\\lambda }_{\\text{0}}\\right)& \\mathrm{cos}\\left({k}_{0},{r}_{0}\\right)\\end{array}\\right)\\\\ =\\left(\\begin{array}{ccc}-\\mathrm{sin}\\theta & -\\mathrm{cos}\\theta \\mathrm{sin}\\lambda & \\mathrm{cos}\\theta \\mathrm{cos}\\lambda \\\\ \\mathrm{cos}\\theta & -\\mathrm{sin}\\theta \\mathrm{sin}\\lambda & \\mathrm{sin}\\theta \\mathrm{cos}\\lambda \\\\ 0& \\mathrm{cos}\\lambda & \\mathrm{sin}\\lambda \\end{array}\\right),\\end{array}$ then Equation (A8) can be expressed as\n\n$\\left(\\begin{array}{c}u\\\\ v\\\\ w\\end{array}\\right)={I}_{1}\\left(\\begin{array}{c}{V}_{\\theta }\\\\ {V}_{\\lambda }\\\\ {V}_{r}\\end{array}\\right)$ (A9).\n\nInverse matrix of I1 is\n\n$\\begin{array}{c}{\\left({I}_{1}\\right)}^{-1}=\\left(\\begin{array}{c}{\\theta }_{0}\\\\ {\\lambda }_{\\text{0}}\\\\ {r}_{0}\\end{array}\\right)\\left({i}_{0},{j}_{0},{k}_{0}\\right)=\\left(\\begin{array}{ccc}\\mathrm{cos}\\left({\\theta }_{0},{i}_{0}\\right)& \\mathrm{cos}\\left({\\theta }_{0},{j}_{0}\\right)& \\mathrm{cos}\\left({\\theta }_{0},{k}_{0}\\right)\\\\ \\mathrm{cos}\\left({\\lambda }_{\\text{0}},{i}_{0}\\right)& \\mathrm{cos}\\left({\\lambda }_{\\text{0}},{j}_{0}\\right)& \\mathrm{cos}\\left({\\lambda }_{\\text{0}},{k}_{0}\\right)\\\\ \\mathrm{cos}\\left({r}_{0},{i}_{0}\\right)& \\mathrm{cos}\\left({r}_{0},{j}_{0}\\right)& \\mathrm{cos}\\left({r}_{0},{k}_{0}\\right)\\end{array}\\right)\\\\ =\\left(\\begin{array}{ccc}-\\mathrm{sin}\\theta & \\mathrm{cos}\\theta & 0\\\\ -\\mathrm{cos}\\theta \\mathrm{sin}\\lambda & -\\mathrm{sin}\\theta \\mathrm{sin}\\lambda & \\mathrm{cos}\\lambda \\\\ \\mathrm{cos}\\theta \\mathrm{cos}\\lambda & \\mathrm{sin}\\theta \\mathrm{cos}\\lambda & \\mathrm{sin}\\lambda \\end{array}\\right)\\end{array}$ (A10).\n\nNotice also that transpose of I1 also is inverse of I1 (I1 is thus an orthogonal matrix). In addition, I1 also is unitary and normal matrix (det(I1) = I).\n\nFrom Equation (A9), $\\left(\\begin{array}{c}{V}_{\\theta }\\\\ {V}_{\\lambda }\\\\ {V}_{r}\\end{array}\\right)={\\left({I}_{1}\\right)}^{-1}\\left(\\begin{array}{c}u\\\\ v\\\\ w\\end{array}\\right)$ , that is\n\n$\\left\\{\\begin{array}{l}{V}_{\\theta }=\\left(-\\mathrm{sin}\\theta \\right)u+\\left(\\mathrm{cos}\\theta \\right)v\\\\ {V}_{\\lambda }=\\left(-\\mathrm{cos}\\theta \\mathrm{sin}\\lambda \\right)u-\\left(\\mathrm{sin}\\theta \\mathrm{sin}\\lambda \\right)v+\\left(\\mathrm{cos}\\lambda \\right)w\\\\ {V}_{r}=\\left(\\mathrm{cos}\\theta \\mathrm{cos}\\lambda \\right)u+\\left(\\mathrm{sin}\\theta \\mathrm{cos}\\lambda \\right)v+\\left(\\mathrm{sin}\\lambda \\right)w\\end{array}$ (A11)\n\nNote also that I1 and its inverse matrix also denote the projection of the unit vectors between the curvilinear and Cartesian coordinate system. For example, simply look at the geometry in Figure (A1), it is apparent that the projection of ${\\lambda }_{\\text{0}}$ in the $\\left({i}_{0},{j}_{0},{k}_{0}\\right)$ directions is exactly the second row in the inverse matrix (Equation (A10)). Similarly, projection of ${j}_{0}$ in the curvilinear coordinate system is the second row in I1, or, ${j}_{0}=\\mathrm{cos}\\theta {\\theta }_{0}-\\mathrm{sin}\\lambda \\mathrm{sin}\\theta {\\lambda }_{\\text{0}}+\\mathrm{cos}\\lambda \\mathrm{sin}\\theta {r}_{0}$ . Notice the similarity with Equation (A8). Actually, plugging these unit vectors’ projection expressions into Equations (A2) and (A3), the velocity transformation equations can be similarly obtained. For example,\n\n$\\begin{array}{c}v=V\\cdot {j}_{0}=\\left({V}_{\\theta }{\\theta }_{0}+{V}_{\\lambda }{\\lambda }_{\\text{0}}+{V}_{r}{r}_{0}\\right)\\left(\\mathrm{cos}\\theta {\\theta }_{0}-\\mathrm{sin}\\lambda \\mathrm{sin}\\theta {\\lambda }_{\\text{0}}+\\mathrm{cos}\\lambda \\mathrm{sin}\\theta {r}_{0}\\right)\\\\ ={V}_{\\theta }\\mathrm{cos}\\theta -\\mathrm{sin}\\lambda \\mathrm{sin}\\theta {V}_{\\lambda }+\\mathrm{cos}\\lambda \\mathrm{sin}\\theta {V}_{r}.\\end{array}$ This is\n\nexactly the second equation of Equation (A8). Similarly, using the direction-cosine relationships, for example, using the projections of ${\\theta }_{0}$ in the original Cartesian coordinate system:\n\n$\\mathrm{cos}\\left({\\theta }_{0},{i}_{0}\\right)=-\\mathrm{sin}\\theta ;\\mathrm{cos}\\left({\\theta }_{0},{j}_{0}\\right)=\\mathrm{cos}\\theta ;\\mathrm{cos}\\left({\\theta }_{0},{k}_{0}\\right)=0$ . It is straightforward to obtain that\n\n${V}_{\\theta }=V\\cdot {\\theta }_{0}=\\left(u{i}_{0}+v{j}_{0}+w{k}_{0}\\right)\\left(-\\mathrm{sin}\\theta {i}_{0}+\\mathrm{cos}\\theta {j}_{0}+0{k}_{0}\\right)=-\\mathrm{sin}\\theta u+\\mathrm{cos}\\theta v$ , exactly the first equation of Equation (A11).\n\nAppendix 2: Viscoelastic Relaxation after Major Earthquakes\n\nRheology of mantle material is generally assumed to be viscoelastic. A combination of Maxwell fluid and Kelvin solid in the form of Burgers rheology is commonly accepted in the geophysics community.\n\nIn Figure A2, the Burgers rheology is represented by a serial connection of a Maxwell fluid of viscosity ${\\eta }_{M}$ and rigidity ${\\mu }_{M}$ and a Kelvin solid of viscosity ${\\eta }_{K}$ and rigidity ${\\mu }_{K}$ . ${\\tau }_{M}$ and ${\\tau }_{K}$ are Maxwell and Kelvin relaxation times.\n\n1) A maxwell fluid is a viscoelastic material having the properties of both elasticity and viscosity. It can be represented by a purely viscous damper and a purely elastic spring connected in series. In this configuration, under applied axial stress, the total stress, ${\\sigma }_{Total}$ and the total strain ${\\epsilon }_{Total}$ can be defined as follows. ${\\sigma }_{Total}={\\sigma }_{D\\left(amper\\right)}={\\sigma }_{S\\left(pring\\right)}$ ; ${\\epsilon }_{Total}={\\epsilon }_{D\\left(amper\\right)}+{\\epsilon }_{S\\left(pring\\right)}$ . Where the subscript D indicates the strain/stress in the damper and the substract S indicates the stress/strain in the spring. For Newtonian fluids, we have\n\n$\\frac{\\text{d}{\\epsilon }_{Dl}}{\\text{d}t}={\\sigma }_{D}{\\eta }_{M}={\\sigma }_{T}{\\eta }_{M}$ ; and from Hook’s law for elastic modulus form $\\frac{\\text{d}{\\epsilon }_{S}}{\\text{d}t}=\\frac{1}{E}\\frac{\\text{d}{\\sigma }_{S}}{\\text{d}t}=\\frac{1}{E}\\frac{\\text{d}{\\sigma }_{T}}{\\text{d}t}$ . The total strain rate thus follows\n\n$\\frac{\\text{d}{\\epsilon }_{T}}{\\text{d}t}=\\frac{\\text{d}{\\epsilon }_{D}}{\\text{d}t}+\\frac{\\text{d}{\\epsilon }_{S}}{\\text{d}t}=\\frac{{\\sigma }_{T}}{\\eta }+\\frac{1}{E}\\frac{\\text{d}{\\sigma }_{T}}{\\text{d}t}.$ (A12)\n\nIn a Maxwell material, stress $\\sigma$ , strain $\\epsilon$ , and their rates of change w.r.t. time are governed by equations of the form $\\stackrel{˙}{\\epsilon }=\\frac{\\sigma }{\\eta }+\\frac{\\stackrel{˙}{\\sigma }}{E}.$\n\nEquation (A12) can be applied either to the shear stress or to the uniform tension in a material. In the former case, the viscous component sorresponds to that for a Newtonian fluid. In the latter case, it has a slightly different meaning in relating stress and rate of strain. The Maxwell model usually is applied to the case of small deformations. For large deformations one should include some geometrical non-linearity. The following we discuss two special scenarios: a) effect of a sudden deformation, or the stress evolution after a sudden deformation; and b) Effect of a sudden stress, or how the strain evolves upon a sudden applied stress in the axial direction.\n\na) Effect of a sudden deformation\n\nIf an M material is suddenly deformed and held to a strain of ${\\epsilon }_{0}$ , then the stress decays with a characteristic time of $\\frac{\\eta }{E}$ .\n\nA brief proof of this starts from the governing equation (Equation A12) with initial condition of $\\epsilon \\left(t=0\\right)={\\epsilon }_{0}$ and strain rate term $\\stackrel{˙}{\\epsilon }=0$ . That is\n\n$\\left\\{\\begin{array}{l}0=\\frac{\\sigma }{\\eta }+\\frac{\\stackrel{˙}{\\sigma }}{E}\\\\ {\\epsilon |}_{t=0}={\\epsilon }_{0},\\text{}\\sigma =\\sigma \\left({\\epsilon }_{\\text{0}}\\right)={\\sigma }_{0};\\text{\\hspace{0.17em}}{\\epsilon }_{0}={\\sigma }_{0}/E\\end{array}$ (A13).\n\nUsing the generic solution for the first order ordinary differential equation (Appendix 3), it is straightforward to obtain the solution for Equation (A13).\n\n$\\sigma \\left(t\\right)={\\sigma }_{0}{\\text{e}}^{-\\frac{E}{\\eta }t}$ (A14)\n\nFigure A2. Burgers fluid as a serial connection of a Maxwell fluid (subscripts M) and a Kelvin solid (subscripts K). The Maxwell fluid and Kelvin solid are respectively serial and parallel connection of pure spring and pure viscous damper filled with Newtonian fluids.\n\nIf we then free the material at time t1, then the elastic element will sprong back by the value of\n\n${\\epsilon }_{back}=-\\frac{\\sigma \\left({t}_{1}\\right)}{E}={\\epsilon }_{0}{\\text{e}}^{-\\frac{E}{\\eta }{t}_{1}}$ (A15)\n\nThis is because at time t1, from Equation (A14), keeping the existing deformation need a confining pressure of $\\sigma \\left({t}_{1}\\right)={\\sigma }_{0}{\\text{e}}^{-\\frac{E}{\\eta }{t}_{1}}={\\sigma }_{0}\\mathrm{exp}\\left(-\\frac{E}{\\eta }{t}_{1}\\right)$ . If remove the confining pressure at this moment, the elastic recovery would be $\\frac{\\sigma \\left({t}_{1}\\right)}{E}=\\frac{1}{E}{\\sigma }_{0}\\mathrm{exp}\\left(-\\frac{E}{\\eta }{t}_{1}\\right)\\stackrel{\\text{usingEq}\\text{.}\\left(\\text{A13}\\right)’\\text{sinitialconditionexpression}}{=}{\\epsilon }_{0}\\mathrm{exp}\\left(-\\frac{E}{\\eta }{t}_{1}\\right)$ . As the viscous element would not return to its original length, the irreversible component of deformation can be simplified as\n\n${\\epsilon }_{irreversible}={\\epsilon }_{0}-{\\epsilon }_{back}={\\epsilon }_{0}\\left(1-\\mathrm{exp}\\left(-\\frac{E}{\\eta }{t}_{1}\\right)\\right)$ (A16)\n\nb) Effect of a sudden stress\n\nIf an M material is suddenly subject to a stress ${\\sigma }_{0}$ , then the elastic element would suddenly deform and the viscous element would deform with a constant rate\n\n$\\epsilon \\left(t\\right)=\\underset{reversible}{\\frac{{\\sigma }_{0}}{E}}+\\underset{irreversible}{\\frac{{\\sigma }_{0}}{\\eta }t}$ . (A17)\n\n(Recall the definition of Newtonian fluid as $\\stackrel{˙}{\\epsilon }=\\frac{\\sigma }{\\eta }$ , part of Equation (A12)).\n\nIf at some time t1 we would release the material, then the deformation of the elastic element would be the spring back deformation and the deformation to the viscous part would not change back. That is at any arbitrary time t1,\n\n${\\epsilon }_{reversible}=\\frac{{\\sigma }_{0}}{E};{\\epsilon }_{irreversible}=\\frac{{\\sigma }_{0}}{\\eta }{t}_{1}$ .\n\nConflicts of Interest\n\nThe authors declare no conflicts of interest.\n\nCite this paper\n\nRen, D. (2019) An Earthquake Model Based on Fatigue Mechanism—A Tale of Earthquake Triad. Journal of Geoscience and Environment Protection, 7, 290-326. doi: 10.4236/gep.2019.78021.\n\n Andes Geological Survey (2006). Audet, P., & Burgmann, R. (2014). Possible Control of Subduction Zone Slow Earthquake Periodicity by Silica Enrichment. Nature, 510, 389-392.https://doi.org/10.1038/nature13391 Baby, P., Rochat, P., Mascle, G., & Hérail, G. (1997). Neogene Shortening Contribution to Crustal Thickening in the Back Arc of the Central Andes. Geology, 25, 883-886. https://doi.org/10.1130/0091-7613(1997)025<0883:NSCTCT>2.3.CO;2 Bejar-Pizarro, M., Socquet, A., Armijo, R., Carrizo, D., & Genrich, J. (2013). Andean Structural Control on Interseismic Coupling in the North Chile Subduction Zone. Nature Geoscience, 6, 462-467. https://doi.org/10.1038/ngeo1802 Bercovici, D., & Ricard, Y. (2014). Plate Tectonics, Damage and Inheritance. Nature, 508, 513-516. https://doi.org/10.1038/nature13072 Bollinger, L., Sapkota, S., Tapponnier, P., Klinger, Y., Rizza, M., Van der Woerd, J., Tiwari, D., Pandey, R., Bitri, A., & Bes de Berc, S. (2014). Estimating the Return Times of Great Himalayan Earthquakes in Eastern Nepal: Evidence from the Patu and Bardibas Strands of the Main Frontal Thrust. Journal of Geophysical Research: Solid Earth, 119, 7123-7163. https://doi.org/10.1002/2014JB010970 Cavalie, O., & Jónsson, S. (2014). Block-Like Plate Movements in Eastern Anatolia Observed by InSAR. Geophysical Research Letters, 41, 26-31.https://doi.org/10.1002/2013GL058170 Chlieh, M., Perfettini, H., Tavera, H., Avouac, J. P., Remy, D., Nocquet, J. M., Rolandone, F., Bondoux, F., Gabalda, G., & Bonvalot, S. (2011). Interseismic Coupling and Seismic Potential along the Central Andes Subduction Zone. Journal of Geophysical Research: Solid Earth, 116, B12405. https://doi.org/10.1029/2010JB008166 Christensen, N., & Mooney, W. (1995). Seismic Velocity Structure and Composition of the Continental Crust: A Global View. Journal of Geophysical Research: Solid Earth, 100, 9761-9788. https://doi.org/10.1029/95JB00259 Cowan, D., Cladouhos, T., & Morgan, J. (2003). Structural Geology and Kinematic History of Rocks Formed along Low-Angle Faults, Death Valley, California. Geological Society of America Bulletin, 115, 1230-1248. https://doi.org/10.1130/B25245.1 Dietrich, W., & Perron, J. (2006). The Search for a Topographic Signature of Life. Nature, 439, 411-418. https://doi.org/10.1038/nature04452 Famiglietti, J., & Rodell, M. (2013). Water in the Balance. Science, 340, 1300-1301.https://doi.org/10.1126/science.1236460 Fossen, H. (2010). Structural Geology (463 p). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511777806 Gao, X., & Wang, K. (2014). Strength of Stick-Slip and Creeping Subduction Megathrusts from Heat Flow Observations. Science, 345, 1038-1041. https://doi.org/10.1126/science.1255487 Gao, X., & Wang, K. (2017). Rheological Separation of the Megathrust Seismogenic Zone and Episodic Tremor and Slip. Nature, 543, 416-419.https://doi.org/10.1038/nature21389 Gilman, J. (1960). Direct Measurements of the Surface Energy of Crystals. Journal of Applied Physics, 31, 2208-2218. https://doi.org/10.1063/1.1735524 Hayes, G., Wald, D., & Johnson, R. (2012). Slab 1.0: A Three-Dimensional Model of Global Subduction Zone Geometries. Journal of Geophysical Research: Solid Earth, 117, B01302. https://doi.org/10.1029/2011JB008524 Hyndman, R., McCrory, P., Wech, A., Kao, H., & Ague, J. (2015). Cascadia Subducting Plate Fluids Channeled to Fore-Arc Mantle Corner: ETS and Silica Deposition. Journal of Geophysical Research: Solid Earth, 120, 4344-4358. https://doi.org/10.1002/2015JB011920 Jaeger, J. (1963). Extension Failures in Rocks Subject to Fluid Pressure. Journal of Geophysical Research, 68, 6066-6067. https://doi.org/10.1029/JZ068i021p06066 Jiang, X., & Li, Z. (2014). Seismic Reflection Data Support Episodic and Simultaneous Growth of the Tibetan Plateau Since 25 Myr. Nature Communications, 5, 5453.https://doi.org/10.1038/ncomms6453 Jop, P., Forterre, Y., & Pouliquen, O. (2006). A Constitutive Law for Dense Granular Flows. Nature, 441, 727-730. https://doi.org/10.1038/nature04801 Kronbichler, M., Heister, T., & Bangerth, W. (2012). High Accuracy Mantle Convection Simulation through Modern Numerical Methods. Geophysics Journal International, 191, 12-29. https://doi.org/10.1111/j.1365-246X.2012.05609.x Laske, G., Masters, G., Ma, Z., & Pasyanos, M. (2013). Update on CRUST1.0—A 1-Degree Global Model of Earth’s Crust. Lay, T. (2015). The Surge of Great Earthquakes from 2004 to 2014. Earth and Planetary Science Letters, 409, 133-146. https://doi.org/10.1016/j.epsl.2014.10.047 Liu, X., Wang, Y., & Yan, S. (2017). Ground Deformation Associated with Exploitation of Deep Groundwater in Cangzhou City Measured by Multi-Sensor Synthetic Aperture Radar Images. Environmental Earth Sciences, 76, 6. https://doi.org/10.1007/s12665-016-6311-0 Liu, Z., & Bird, P. (2002). Finite Element Modeling of Neotectonics in New Zealand. Journal of Geophysical Research: Solid Earth, 107, ETG 1-1-ETG 1-18. https://doi.org/10.1029/2001JB001075 Loveless, J., & Meade, B. (2010). Geodetic Imaging of Plate Motions, Slip Rates, and Partitioning of Deformation in Japan. Journal of Geophysical Research: Solid Earth, 115, B09301. https://doi.org/10.1029/2008JB006248 Men, K., & Zhao, K. (2016). The 2015 Nepal M8.1 Earthquake and the Prediction for M ≥ 8 Earthquakes in West China. Natural Hazards, 82, 1767-1777. https://doi.org/10.1007/s11069-016-2268-2 Pritchard, M., & Simons, M. (2006). An Aseismic Slip Pulse in Northern Chile and along-Strike Variations in Seismogenic Behavior. Journal of Geophysical Research: Solid Earth, 111, B08405. https://doi.org/10.1029/2006JB004258 Ren, D. (2014a). Storm-Triggered Landslides in Warmer Climates. New York: Springer. Ren, D. (2014b). The Devastating Zhouqu Storm-Triggered Debris Flow of August 2010: Likely Causes and Possible Trends in a Future Warming Climate. Journal of Geophysical Research: Atmospheres, 119, 3643-3662. https://doi.org/10.1002/2013JD020881 Ren, D., & Leslie, L. (2014). Effects of Waves on Tabular Ice-Shelf Calving. Earth Interactions, 18, 1-28. https://doi.org/10.1175/EI-D-14-0005.1 Ren, D., Fu, R., Leslie, L. M., & Dickinson, R. (2011). Predicting Storm-Triggered Landslides. Bulletin of the American Meteorological Society, 92, 129-139.https://doi.org/10.1175/2010BAMS3017.1 Ren, D., Leslie, L. M., & Karoly, D. (2008). Landslide Risk Analysis Using a New Constitutive Relationship for Granular Flow. Earth Interactions, 12, 1-16.https://doi.org/10.1175/2007EI237.1 Ren, D., Leslie, L. M., & Lynch, M. (2012). Verification of Model Simulated Mass Balance, Flow Fields and Tabular Calving Events of the Antarctic Ice Sheet against Remotely Sensed Observations. Climate Dynamics, 40, 2617-2636. https://doi.org/10.1007/s00382-012-1464-3 Ren, D., Leslie, L., Shen, X., Hong, Y., Duan, Q., Mahmood, R., Li, Y., Huang, G., Guo, W., & Lynch, M. (2015). The Gravity Environment of Zhouqu Debris Flow of August 2010 and Its Implication for Future Recurrence. International Journal of Geosciences, 6, 317-325. https://doi.org/10.4236/ijg.2015.64025 Richardson, N., Densmore, A., Seward, D., Fowler, A., Wipf, M., Ellis, M., Yong, L., & Zhang, Y. (2008). Extraordinary Denudation in the Sichuan Basin: Insights from Low-Temperature Thermochronology Adjacent to the Eastern Margin of the Tibetan Plateau. Journal of Geophysical Research: Solid Earth, 113, B04409.https://doi.org/10.1029/2006JB004739 Rowe, C., Kirkpatrick, J., & Brodsky, E. (2012). Fault Rock Injections Record Paleo-Earthquakes. Earth and Planetary Science Letters, 335-336, 154-166. Sawai, Y., Satake, K., Kamataki, T., Nasu, H., Shishikura, M., Atwater, B., Horton, B., Kelsey, H., Nagumo, T., & Yamaguchi, M. (2004). Transient Uplift after a 17th-Century Earthquake along the Kuril Subduction Zone. Science, 306, 1918-1920. https://doi.org/10.1126/science.1104895 Scholz, C. (1998). Earthquakes and Friction Laws. Nature, 391, 37-42.https://doi.org/10.1038/34097 Sun, T., Wang, K., Iinuma, T., Hino, R., He, J., Fujimoto, H., Kido, M., Osada, Y., Miura, S., Ohta, Y., & Hu, Y. (2014). Prevalence of Viscoelastic Relaxation after the 2011 Tohoku-Oki Earthquake. Nature, 514, 84-87. https://doi.org/10.1038/nature13778 Timoshenko, S., & Gere, J. (1963). Theory of Elastic Stability. New York: McGraw-Hill. Ujjie, K., Yamaguchi, A., Kimura, G., & Toh, S. (2007). Fluidization of Granular Material in a Subduction Thrust at Seismogenic Depth. Earth and Planetary Science Letters, 259, 307-318. https://doi.org/10.1016/j.epsl.2007.04.049 Van der Veen, C., & Whillans, I. (1989). Force Budget: I. Theory and Numerical Methods. Journal of Glaciology, 35, 53-60. https://doi.org/10.3189/002214389793701581 Voss, K., Famiglietti, J., Lo, M., de Linage, C., Rodell, M., & Swenson, S. (2013). Groundwater Depletion in the Middle East from GRACE with Implications for Transboundary Water Management in the Tigris-Euphrates-Western Iran Region. Water Resources Research, 49, 904-914. https://doi.org/10.1002/wrcr.20078 Wallace, L., Beavan, J., & McCaffrey, R. (2004). Subduction Zone Coupling and Tectonic Block Rotation in the North Island, New Zealand. Journal of Geophysical Research: Solid Earth, 109. https://doi.org/10.1029/2004JB003241 Wallace, L., Ellis, S., & Mann, P. (2009). Collisional Model for Rapid Fore-Arc Block Rotations, Arc Curvature, and Episodic Back-Arc Rifting in Subduction Settings. Geochemistry, Geophysics, Geosystems, 10, Q05001. https://doi.org/10.1029/2008GC002220 Wallace, L., Kaneko, Y., Hreinsdottir, S., Hamling, I., Peng, Z., Bartlow, N., D’Anastasio, E., & Fry, B. (2017). Large-Scale Dynamic Triggering of Shallow Slow Slip Enhanced by Overlying Sedimentary Wedge. Nature Geoscience, 10, 765-770. https://doi.org/10.1038/ngeo3021 Wang, B., & Ding, Q. (2006). Changes in Global Monsoon Precipitation over the Past 56 Years. Geophysical Research Letters, 33, L06711. https://doi.org/10.1029/2005GL025347 Wang, K. (2015). Subduction Faults as We See Them in the 21st Century. AGU Birch Lecture. Wang, K., & Bilek, S. (2011). Do Subducting Seamounts Generate or Stop Large Earthquakes. Geology, 39, 819-822. https://doi.org/10.1130/G31856.1 Wang, K., & He, J. (1999). Mechanics of Low-Stress Forearcs: Nankai and Cascadia. Journal of Geophysical Research: Solid Earth, 104, 15191-15205. https://doi.org/10.1029/1999JB900103 Wang, K., & Hu, Y. (2006). Accretionary Prisms in Subduction Earthquake Cycles: The Theory of Dynamic Coulomb Wedge. Journal of Geophysical Research: Solid Earth, 111, B06410. https://doi.org/10.1029/2005JB004094 Wang, K., Hu, Y., & He, J. (2012). Deformation Cycles of Subduction Earthquakes in a Viscoelastic Earth. Nature, 484, 327-332. https://doi.org/10.1038/nature11032 Weston, J., Ferreira, A., & Funning, G. (2012). Systematic Comparisons of Earthquake Source Models Determined Using In SAR and Seismic Data. Tectonophysics, 532-535, 61-81. https://doi.org/10.1016/j.tecto.2012.02.001 Williams, C., Eberhart-Phillips, D., Bannister, S., Barker, D., Henrys, S., Reyners, M., & Sutherland, R. (2013). Revised Interface Geometry for the Hikurangi Subduction Zone, New Zealand. Seismological Research Letters, 84, 1066-1073. https://doi.org/10.1785/0220130035 Yanez, G., Ranero, C., Huene, R., & Diaz, J. (2001). Magnetic Anomaly Interpretation across the Southern Central Andes (32-34 S): The Role of the Juan Fernandez Ridge in the Late Tertiary Evolution of the Margin. Journal of Geophysical Research, 106, 6325-6345. https://doi.org/10.1029/2000JB900337 Yuan, X., Sobolev, S., & Kind, R. (2002). Moho Topography in the Central Andes and Its Geodynamic Implications. Earth and Planetary Science Letters, 199, 389-402. https://doi.org/10.1016/S0012-821X(02)00589-7",
null,
""
] | [
null,
"https://www.scirp.org/Images/ccby.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90519917,"math_prob":0.96556383,"size":79492,"snap":"2019-43-2019-47","text_gpt3_token_len":16325,"char_repetition_ratio":0.15992351,"word_repetition_ratio":0.04191919,"special_character_ratio":0.19520204,"punctuation_ratio":0.107611164,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9751732,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-24T06:25:11Z\",\"WARC-Record-ID\":\"<urn:uuid:5bbf2ed1-8b89-4fca-9771-9e6ae08a16b5>\",\"Content-Length\":\"362153\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:436f1a0f-57da-42a0-a767-89570cdc315c>\",\"WARC-Concurrent-To\":\"<urn:uuid:2387651b-0960-4631-9efd-a1707b0943fb>\",\"WARC-IP-Address\":\"173.208.146.83\",\"WARC-Target-URI\":\"https://www.scirp.org/Journal/PaperInformation.aspx?PaperID=94656\",\"WARC-Payload-Digest\":\"sha1:YMOITQH2TDHNBC7OWVHZGDKWSJMMIJZD\",\"WARC-Block-Digest\":\"sha1:QDM22W7ZNZYDS2ZJNOCVLVGYZ7UNFAD3\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987841291.79_warc_CC-MAIN-20191024040131-20191024063631-00126.warc.gz\"}"} |
https://www.ozgrid.com/VBA/sort-alphanumeric.htm | [
"# Excel VBA Video Training",
null,
"/ EXCEL DASHBOARD REPORTS",
null,
"# Sort Alphanumeric Text\n\n## Sort Cells In Excel With Text & Numbers\n\nGot any Excel Questions? Excel Help .\n\nSee Normal Excel Sort and Extract Numbers From Text\n\nSorting Alphanumeric\n\nExcel has a problem trying to sort alphanumeric cells in cells by the number portion only. The reason is simply because Excels Sort feature evaluates each cell value by reading left to right. However, we can over-come this in a few ways with the aid of Excel Macros .\n\nFixed Length Text\n\nIf the alphanumeric cells are all a fixed length we can simply use another column to extract out the numeric portion of the alphanumeric text and then sort by the new column. For example, say we can alphanumeric text in Column A like ABC196, FRH564 etc. We can simply add the formula below to Column B.\n\n=--RIGHT(A1,3)\n\nOR\n\n=--Left(A1,3) for fixed length alphanumeric text like 196GFT\n\nOR\n\n=--MID(A1,5,4) for alphanumeric text like a-bg1290rqty where you know the number Start s at the 5th character and has 4 numbers\n\nThen Fill Down as far as needed. Then we can select Column B, copy and Edit>Paste Special - Values. Next we sort Columns A & B by Column B and then delete Column B.\n\nNOTE: the double negative (--) ensures the number returned is seen as a true number.\n\nSort Alphanumeric\n\nAny Length Alphanumeric Text\n\nA problem comes about when the numeric portion and/or the text portion can be any length. In these cases a macro is best. The code below should be copied to any standard Module (Insert>Module). Then simply run the SortAlphaNumerics Procedure.\n\nIt should be noted that the ExtractNumber Function has 2 optional arguments (Take_decimal and Take_negative). These are both False if omitted. See the table below to see how alphanumeric text is treated.\n\n Alphanumeric Text Formula Result a-bg-12909- =ExtractNumber(A1,,TRUE) -12909 a-bg-12909- =ExtractNumber(A2) 12909 a.a1.2... =ExtractNumber(A3,TRUE) 1.2 a.a1.2... =ExtractNumber(A4) 12 a.a-1.2.... =ExtractNumber(A5,TRUE,TRUE) -1.2 abg1290.11 =ExtractNumber(A6,TRUE) 1290.11 abg129013Agt =ExtractNumber(A7) 129013 abg129012 =ExtractNumber(A8) 129013\n\nAlphanumeric Sorting Code\n\n```'MUST be at top of same public module housing _\n\nSub SortAlphaNumerics and Function ExtractNumber\n\nDim bDec As Boolean, bNeg As Boolean\n\nSub SortAlphaNumerics()\n\nDim wSheetTemp As Worksheet\n\nDim wsStart As Worksheet\n\nDim lLastRow As Long, lReply As Long\n\nDim rSort As Range\n\n''''''''''''''''''''''''''''''''''''''''''\n\n'www.ozgrid.com\n\n'Sorts Alphanumerics Of Column \"A\" of active Sheet.\n\n'ExtractNumber Function REQUIRED\n\n'http://www.ozgrid.com/VBA/ExtractNum.htm\n\n''''''''''''''''''''''''''''''''''''''''''\n\nSet wsStart = ActiveSheet\n\nOn Error Resume Next\n\nSet rSort = Application.InputBox _\n\n(\"Select range to sort. Any heading should be bolded and included\", \"ALPHANUMERIC SORT\", _\nActiveCell.CurrentRegion.Columns(1).Address, , , , , 8)\n\nIf rSort Is Nothing Then Exit Sub\n\nIf rSort.Columns.Count > 1 Then\n\nMsgBox \"Single Column Only\"\n\nSortAlphaNumerics\n\nEnd If\n\n'Application.ScreenUpdating = False\n\nSet rSort = Range(Cells(rSort.Cells(1, 1).Row, rSort.Column), _\nCells(Rows.Count, rSort.Column).End(xlUp))\n\nlReply = MsgBox(\"Include Decimals within numbers\", vbYesNoCancel, \"OZGRID ALPHANUMERIC SORT\")\n\nIf lReply = vbCancel Then Exit Sub\n\nlReply = MsgBox(\"Include negative signs within numbers\", vbYesNoCancel, \"OZGRID ALPHANUMERIC SORT\")\n\nIf lReply = vbCancel Then Exit Sub\n\nlLastRow = rSort.Cells(rSort.Rows.Count).Row\n\nrSort.Copy wSheetTemp.Range(\"A1\")\n\nWith wSheetTemp.Range(\"B1:B\" & lLastRow)\n\n.FormulaR1C1 = \"=ExtractNumber(RC[-1],\" & bDec & \",\" & bNeg & \")\"\n\n.Copy\n\n.PasteSpecial xlPasteValues\n\nApplication.CutCopyMode = False\n\nEnd With\n\nbNeg = False\n\nbDec = False\n\nWith wSheetTemp.UsedRange\n\nOrderCustom:=1, MatchCase:=False, Orientation:=xlTopToBottom, _\nDataOption1:=xlSortNormal\n\nEnd With\n\nwSheetTemp.Delete\n\nApplication.ScreenUpdating = True\n\nEnd Sub\n\nFunction ExtractNumber(rCell As Range, _\nOptional Take_decimal As Boolean, Optional Take_negative As Boolean) As Double\n\nDim iCount As Integer, i As Integer, iLoop As Integer\n\nDim sText As String, strNeg As String, strDec As String\n\nDim lNum As String\n\nDim vVal, vVal2\n\n''''''''''''''''''''''''''''''''''''''''''\n\n'www.ozgrid.com\n\n'Extracts a number from a cell containing text and numbers.\n\n''''''''''''''''''''''''''''''''''''''''''\n\nsText = rCell\n\nIf Take_decimal = True And Take_negative = True Then\n\nstrNeg = \"-\" 'Negative Sign MUST be before 1st number.\n\nstrDec = \".\"\n\nElseIf Take_decimal = True And Take_negative = False Then\n\nstrNeg = vbNullString\n\nstrDec = \".\"\n\nElseIf Take_decimal = False And Take_negative = True Then\n\nstrNeg = \"-\"\n\nstrDec = vbNullString\n\nEnd If\n\niLoop = Len(sText)\n\nFor iCount = iLoop To 1 Step -1\n\nvVal = Mid(sText, iCount, 1)\n\nIf IsNumeric(vVal) Or vVal = strNeg Or vVal = strDec Then\n\ni = i + 1\n\nlNum = Mid(sText, iCount, 1) & lNum\n\nIf IsNumeric(lNum) Then\n\nIf CDbl(lNum) < 0 Then Exit For\n\nElse\n\nlNum = Replace(lNum, Left(lNum, 1), \"\", , 1)\n\nEnd If\n\nEnd If\n\nIf i = 1 And lNum <> vbNullString Then lNum = CDbl(Mid(lNum, 1, 1))\n\nNext iCount\n\nExtractNumber = CDbl(lNum)\n\nEnd Function```\n\nExcel Dashboard Reports & Excel Dashboard Charts 50% Off Become an ExcelUser Affiliate & Earn Money\n\nSpecial! Free Choice of Complete Excel Training Course OR Excel Add-ins Collection on all purchases totaling over \\$64.00. ALL purchases totaling over \\$150.00 gets you BOTH! Purchases MUST be made via this site. Send payment proof to [email protected] 31 days after purchase date."
] | [
null,
"http://ad.linksynergy.com/fs-bin/show",
null,
"http://www.ozgrid.com/images/logo/ozgrid.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5923442,"math_prob":0.9289412,"size":6349,"snap":"2023-40-2023-50","text_gpt3_token_len":1691,"char_repetition_ratio":0.13806146,"word_repetition_ratio":0.04366812,"special_character_ratio":0.27295637,"punctuation_ratio":0.15575221,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99163324,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T15:23:21Z\",\"WARC-Record-ID\":\"<urn:uuid:26689448-6636-4312-8ef9-50e0edb9e69b>\",\"Content-Length\":\"15029\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3524d152-ee9e-4f4d-9188-3e1c3f97933d>\",\"WARC-Concurrent-To\":\"<urn:uuid:2eaa2f7a-5a19-4583-ab18-e917dfaceb11>\",\"WARC-IP-Address\":\"104.26.6.95\",\"WARC-Target-URI\":\"https://www.ozgrid.com/VBA/sort-alphanumeric.htm\",\"WARC-Payload-Digest\":\"sha1:IJPADM5X3VH6PIWSRQCCSOJB4C35H5W7\",\"WARC-Block-Digest\":\"sha1:XT235U3EY2CYBEYO735CZ3UH25U3FQVD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510412.43_warc_CC-MAIN-20230928130936-20230928160936-00799.warc.gz\"}"} |
http://fricas.github.io/api/Poset.html | [
"Poset S¶\n\nholds a complete set together with a structure to codify the partial order. for more documentation see: http://www.euclideanspace.com/prog/scratchpad/mycode/discrete/logic/index.htm Date Created: Aug 2015 Basic Operations: Related packages: UserDefinedPartialOrdering in setorder.spad Related categories: PartialOrder in catdef.spad Related Domains: DirectedGraph in graph.spad Also See: AMS Classifications:\n\n+: (%, %) -> %\n\nfrom FiniteGraph S\n\n=: (%, %) -> Boolean\n\nfrom BasicType\n\n~=: (%, %) -> Boolean\n\nfrom BasicType\n\naddArrow!: (%, NonNegativeInteger, NonNegativeInteger) -> %\n\naddArrow!(s, nm, n1, n2) adds an arrow to the graph s, where: n1 is the index of the start object n2 is the index of the end object This is done in a non-mutable way, that is, the original poset is not changed instead a new one is constructed.\n\naddArrow!: (%, Record(name: String, arrType: NonNegativeInteger, fromOb: NonNegativeInteger, toOb: NonNegativeInteger, xOffset: Integer, yOffset: Integer, map: List NonNegativeInteger)) -> %\n\nfrom FiniteGraph S\n\naddArrow!: (%, String, NonNegativeInteger, NonNegativeInteger) -> %\n\nfrom FiniteGraph S\n\naddArrow!: (%, String, NonNegativeInteger, NonNegativeInteger, List NonNegativeInteger) -> %\n\nfrom FiniteGraph S\n\naddArrow!: (%, String, S, S) -> %\n\nfrom FiniteGraph S\n\naddObject!: (%, Record(value: S, posX: NonNegativeInteger, posY: NonNegativeInteger)) -> %\n\nfrom FiniteGraph S\n\naddObject!(s, n) adds object with coordinates n to the graph s. This is done in a non-mutable way, that is, the original poset is not changed instead a new one is constructed.\n\nfrom FiniteGraph S\n\narrowName: (%, NonNegativeInteger, NonNegativeInteger) -> String\n\nfrom FiniteGraph S\n\narrowsFromArrow: (%, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\narrowsFromNode: (%, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\narrowsToArrow: (%, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\narrowsToNode: (%, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\ncoerce: % -> OutputForm\ncompleteReflexivity: % -> %\n\nReflexivity requires forall(x): x<=x This function enforces this by making sure that every element has arrow to itself. That is, the leading diagonal is true.\n\ncompleteTransitivity: % -> %\n\nTransitivity requires forall(x, y, z): x<=y and y<=z implies x<=z This function enforces this by making sure that the composition of any two arrows is also an arrow.\n\ncoverMatrix: % -> IncidenceAlgebra(Integer, S)\n\nthe covering matrix of a list of elements from a comparison function the list is assumed to be topologically sorted, i.e. w.r. to a linear extension of the comparison function f This function is based on code by Franz Lehner. Notes by Martin Baker on the webpage here: url{http://www.euclideanspace.com/prog/scratchpad/mycode/discrete/logic/moebius/}\n\ncreateWidth: NonNegativeInteger -> NonNegativeInteger\n\nfrom FiniteGraph S\n\ncreateX: (NonNegativeInteger, NonNegativeInteger) -> NonNegativeInteger\n\nfrom FiniteGraph S\n\ncreateY: (NonNegativeInteger, NonNegativeInteger) -> NonNegativeInteger\n\nfrom FiniteGraph S\n\ncycleClosed: (List S, String) -> %\n\nfrom FiniteGraph S\n\ncycleOpen: (List S, String) -> %\n\nfrom FiniteGraph S\n\ndeepDiagramSvg: (String, %, Boolean) -> Void\n\nfrom FiniteGraph S\n\ndiagramHeight: % -> NonNegativeInteger\n\nfrom FiniteGraph S\n\ndiagramsSvg: (String, List %, Boolean) -> Void\n\nfrom FiniteGraph S\n\ndiagramSvg: (String, %, Boolean) -> Void\n\nfrom FiniteGraph S\n\ndiagramWidth: % -> NonNegativeInteger\n\nfrom FiniteGraph S\n\ndistance: (%, NonNegativeInteger, NonNegativeInteger) -> Integer\n\nfrom FiniteGraph S\n\ndistanceMatrix: % -> Matrix Integer\n\nfrom FiniteGraph S\n\nfinitePoset: (List S, (S, S) -> Boolean) -> %\n\nconstructor where the set and structure is supplied. The structure is supplied as a predicate function.\n\nfinitePoset: (List S, List List Boolean) -> %\n\nconstructor where the set and structure is supplied.\n\nflatten: DirectedGraph % -> %\n\nfrom FiniteGraph S\n\ngetArr: % -> List List Boolean\n\ngetArr(s) returns a list of all the arrows (or edges) Note: different from getArrows(s) which is inherited from FiniteGraph(S)\n\ngetArrowIndex: (%, NonNegativeInteger, NonNegativeInteger) -> NonNegativeInteger\n\nfrom FiniteGraph S\n\ngetArrows: % -> List Record(name: String, arrType: NonNegativeInteger, fromOb: NonNegativeInteger, toOb: NonNegativeInteger, xOffset: Integer, yOffset: Integer, map: List NonNegativeInteger)\n\nfrom FiniteGraph S\n\ngetVert: % -> List S\n\ngetVert(s) returns a list of all the vertices (or objects) of the graph s. Note: different from getVertices(s) which is inherited from FiniteGraph(S)\n\ngetVertexIndex: (%, S) -> NonNegativeInteger\n\nfrom FiniteGraph S\n\ngetVertices: % -> List Record(value: S, posX: NonNegativeInteger, posY: NonNegativeInteger)\n\nfrom FiniteGraph S\n\nglb: (%, List NonNegativeInteger) -> Union(NonNegativeInteger, failed)\n\n‘greatest lower bound’ or ‘infimum’ In this version of glb nodes are represented as index values. Not every subset of a poset will have a glb in which case “failed” will be returned as an error indication.\n\nhash: % -> SingleInteger\n\nfrom SetCategory\n\nhashUpdate!: (HashState, %) -> HashState\n\nfrom SetCategory\n\nimplies: (%, NonNegativeInteger, NonNegativeInteger) -> Boolean\n\nincidenceMatrix: % -> Matrix Integer\n\nfrom FiniteGraph S\n\ninDegree: (%, NonNegativeInteger) -> NonNegativeInteger\n\nfrom FiniteGraph S\n\nindexToObject: (%, NonNegativeInteger) -> S\n\nreturns the object at a given index\n\ninitial: () -> %\n\nfrom FiniteGraph S\n\nisAcyclic?: % -> Boolean\n\nfrom FiniteGraph S\n\nisAntiChain?: % -> Boolean\n\nis a subset of a partially ordered set such that any two elements in the subset are incomparable\n\nisAntisymmetric?: % -> Boolean\n\nAntisymmetric requires forall(x, y): x<=y and y<=x iff x=y Returns true if this is the case for every element.\n\nisChain?: % -> Boolean\n\nis a subset of a partially ordered set such that any two elements in the subset are comparable\n\nisDirected?: () -> Boolean\n\nfrom FiniteGraph S\n\nisDirectSuccessor?: (%, NonNegativeInteger, NonNegativeInteger) -> Boolean\n\nfrom FiniteGraph S\n\nisFixPoint?: (%, NonNegativeInteger) -> Boolean\n\nfrom FiniteGraph S\n\nisFunctional?: % -> Boolean\n\nfrom FiniteGraph S\n\nisGreaterThan?: (%, NonNegativeInteger, NonNegativeInteger) -> Boolean\n\nfrom FiniteGraph S\n\njoinIfCan: (%, List NonNegativeInteger) -> Union(NonNegativeInteger, failed)\n\nreturns the join of a subset of lattice given by list of elements\n\njoinIfCan: (%, NonNegativeInteger, NonNegativeInteger) -> Union(NonNegativeInteger, failed)\n\nreturns the join of ‘a’ and 'b' In this version of join nodes are represented as index values. In the general case, not every poset will have a join in which case “failed” will be returned as an error indication.\n\nkgraph: (List S, String) -> %\n\nfrom FiniteGraph S\n\nlaplacianMatrix: % -> Matrix Integer\n\nfrom FiniteGraph S\n\nlatex: % -> String\n\nfrom SetCategory\n\nle: (%, NonNegativeInteger, NonNegativeInteger) -> Boolean\n\nfrom Preorder S\n\nloopsArrows: % -> List Loop\n\nfrom FiniteGraph S\n\nloopsAtNode: (%, NonNegativeInteger) -> List Loop\n\nfrom FiniteGraph S\n\nloopsNodes: % -> List Loop\n\nfrom FiniteGraph S\n\nlooseEquals: (%, %) -> Boolean\n\nfrom FiniteGraph S\n\nlowerSet: % -> %\n\na subset U with the property that, if x is in U and x >= y, then y is in U\n\nlub: (%, List NonNegativeInteger) -> Union(NonNegativeInteger, failed)\n\n‘least upper bound’ or ‘supremum’ In this version of lub nodes are represented as index values. Not every subset of a poset will have a lub in which case “failed” will be returned as an error indication.\n\nmap: (%, List NonNegativeInteger, List S, Integer, Integer) -> %\n\nfrom FiniteGraph S\n\nmapContra: (%, List NonNegativeInteger, List S, Integer, Integer) -> %\n\nfrom FiniteGraph S\n\nmax: % -> NonNegativeInteger\n\nfrom FiniteGraph S\n\nmax: (%, List NonNegativeInteger) -> NonNegativeInteger\n\nfrom FiniteGraph S\n\nmeetIfCan: (%, List NonNegativeInteger) -> Union(NonNegativeInteger, failed)\n\nreturns the meet of a subset of lattice given by list of elements\n\nmeetIfCan: (%, NonNegativeInteger, NonNegativeInteger) -> Union(NonNegativeInteger, failed)\n\nreturns the meet of ‘a’ and 'b' In this version of meet nodes are represented as index values. In the general case, not every poset will have a meet in which case “failed” will be returned as an error indication.\n\nmerge: (%, %) -> %\n\nfrom FiniteGraph S\n\nmin: % -> NonNegativeInteger\n\nfrom FiniteGraph S\n\nmin: (%, List NonNegativeInteger) -> NonNegativeInteger\n\nfrom FiniteGraph S\n\nmoebius: % -> IncidenceAlgebra(Integer, S)\n\nmoebius incidence matrix for this poset This function is based on code by Franz Lehner. Notes by Martin Baker on the webpage here: url{http://www.euclideanspace.com/prog/scratchpad/mycode/discrete/logic/moebius/}\n\nnodeFromArrow: (%, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\nnodeFromNode: (%, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\nnodeToArrow: (%, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\nnodeToNode: (%, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\nobjectToIndex: (%, S) -> NonNegativeInteger\n\nreturns the index of a given object\n\nopposite: % -> %\n\nconstructs the opposite in the category theory sense of reversing all the arrows\n\noutDegree: (%, NonNegativeInteger) -> NonNegativeInteger\n\nfrom FiniteGraph S\n\npowerSetStructure: S -> %\n\npowerSetStructure(set) is a constructor for a Poset where each element is a set (implemented as a list) and with a subset structure. requires S to be a list.\n\nrouteArrows: (%, NonNegativeInteger, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\nrouteNodes: (%, NonNegativeInteger, NonNegativeInteger) -> List NonNegativeInteger\n\nfrom FiniteGraph S\n\nsetArr: (%, List List Boolean) -> Void\n\nsets the list of all arrows (or edges)\n\nsetVert: (%, List S) -> Void\n\nsets the list of all vertices (or objects)\n\nspanningForestArrow: % -> List Tree Integer\n\nfrom FiniteGraph S\n\nspanningForestNode: % -> List Tree Integer\n\nfrom FiniteGraph S\n\nspanningTreeArrow: (%, NonNegativeInteger) -> Tree Integer\n\nfrom FiniteGraph S\n\nspanningTreeNode: (%, NonNegativeInteger) -> Tree Integer\n\nfrom FiniteGraph S\n\nsubdiagramSvg: (Scene SCartesian 2, %, Boolean, Boolean) -> Void\n\nfrom FiniteGraph S\n\nterminal: S -> %\n\nfrom FiniteGraph S\n\nunit: (List S, String) -> %\n\nfrom FiniteGraph S\n\nupperSet: % -> %\n\na subset U with the property that, if x is in U and x <= y, then y is in U\n\nzetaMatrix: % -> IncidenceAlgebra(Integer, S)\n\nzetaMatrix(P) returns the matrix of the zeta function This function is based on code by Franz Lehner. Notes by Martin Baker on the webpage here: url{http://www.euclideanspace.com/prog/scratchpad/mycode/discrete/logic/moebius/}\n\nBasicType\n\nSetCategory"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.669631,"math_prob":0.54890895,"size":10298,"snap":"2022-05-2022-21","text_gpt3_token_len":2734,"char_repetition_ratio":0.33417526,"word_repetition_ratio":0.37483355,"special_character_ratio":0.24266848,"punctuation_ratio":0.2033582,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9903874,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T16:50:42Z\",\"WARC-Record-ID\":\"<urn:uuid:9f31da3b-55e3-4f31-b8fe-94bd6f0f6141>\",\"Content-Length\":\"77289\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e8fca2f2-d8df-4917-be12-f3bffbb7634b>\",\"WARC-Concurrent-To\":\"<urn:uuid:443712ac-d9e2-4cd7-b006-3102e1eb02b5>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"http://fricas.github.io/api/Poset.html\",\"WARC-Payload-Digest\":\"sha1:AMQ6UWVTITQZLXSEQDBVANOUAT6OEQ7M\",\"WARC-Block-Digest\":\"sha1:PZ6F2Z5YW66AOEZKDIUB2AM2CM6OKM6L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320306301.52_warc_CC-MAIN-20220128152530-20220128182530-00126.warc.gz\"}"} |
https://tipcalc.net/how-much-is-a-20-percent-tip-on-70.31 | [
"# Tip Calculator\n\nHow much is a 20 percent tip on \\$70.31?\n\nTIP:\n\\$ 0\nTOTAL:\n\\$ 0\n\nTIP PER PERSON:\n\\$ 0\nTOTAL PER PERSON:\n\\$ 0\n\n## How much is a 20 percent tip on \\$70.31? How to calculate this tip?\n\nAre you looking for the answer to this question: How much is a 20 percent tip on \\$70.31? Here is the answer.\n\nLet's see how to calculate a 20 percent tip when the amount to be paid is 70.31. Tip is a percentage, and a percentage is a number or ratio expressed as a fraction of 100. This means that a 20 percent tip can also be expressed as follows: 20/100 = 0.2 . To get the tip value for a \\$70.31 bill, the amount of the bill must be multiplied by 0.2, so the calculation is as follows:\n\n1. TIP = 70.31*20% = 70.31*0.2 = 14.062\n\n2. TOTAL = 70.31+14.062 = 84.372\n\n3. Rounded to the nearest whole number: 84\n\nIf you want to know how to calculate the tip in your head in a few seconds, visit the Tip Calculator Home.\n\n## So what is a 20 percent tip on a \\$70.31? The answer is 14.06!\n\nOf course, it may happen that you do not pay the bill or the tip alone. A typical case is when you order a pizza with your friends and you want to split the amount of the order. For example, if you are three, you simply need to split the tip and the amount into three. In this example it means:\n\n1. Total amount rounded to the nearest whole number: 84\n\n2. Split into 3: 28\n\nSo in the example above, if the pizza order is to be split into 3, you’ll have to pay \\$28 . Of course, you can do these settings in Tip Calculator. You can split the tip and the total amount payable among the members of the company as you wish. So the TipCalc.net page basically serves as a Pizza Tip Calculator, as well.\n\n## Tip Calculator Examples (BILL: \\$70.31)\n\nHow much is a 5% tip on \\$70.31?\nHow much is a 10% tip on \\$70.31?\nHow much is a 15% tip on \\$70.31?\nHow much is a 20% tip on \\$70.31?\nHow much is a 25% tip on \\$70.31?\nHow much is a 30% tip on \\$70.31?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.959624,"math_prob":0.99352044,"size":2593,"snap":"2021-43-2021-49","text_gpt3_token_len":856,"char_repetition_ratio":0.31517962,"word_repetition_ratio":0.23529412,"special_character_ratio":0.40185115,"punctuation_ratio":0.16235632,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99777305,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T10:14:56Z\",\"WARC-Record-ID\":\"<urn:uuid:324ee658-8154-496e-aabb-2bb62e05df90>\",\"Content-Length\":\"10951\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ecbcf01-8ede-488b-8647-c2d8d8429862>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad331300-afd0-4bdc-847c-556364a6503d>\",\"WARC-IP-Address\":\"161.35.97.186\",\"WARC-Target-URI\":\"https://tipcalc.net/how-much-is-a-20-percent-tip-on-70.31\",\"WARC-Payload-Digest\":\"sha1:575RXO5WDKR2LKV2V5KEDKTDWVJIE77B\",\"WARC-Block-Digest\":\"sha1:STTB6VWZP527E63QR7GF7EEPKGPVSCUV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358702.43_warc_CC-MAIN-20211129074202-20211129104202-00194.warc.gz\"}"} |
https://stattrek.com/statistics/dictionary.aspx?definition=probability%20density%20function | [
"# Statistics Dictionary\n\nTo see a definition, select a term from the dropdown text box below. The statistics dictionary will display the definition, plus links to related web pages.\n\nSelect term:\n\n### Probability Density Function\n\nMost often, the equation used to describe a continuous probability distribution is called a probability density function . Sometimes, it is referred to as a density function, a PDF, or a pdf. For a continuous probability distribution, the density function has the following properties:\n\n• Since the continuous random variable is defined over a continuous range of values (called the domain of the variable), the graph of the density function will also be continuous over that range.\n• The area bounded by the curve of the density function and the x-axis is equal to 1, when computed over the domain of the variable.\n• The probability that a random variable assumes a value between a and b is equal to the area under the density function bounded by a and b.\n\nFor example, consider the probability density function shown in the graph below. Suppose we wanted to know the probability that the random variable X was less than or equal to a. The probability that X is less than or equal to a is equal to the area under the curve bounded by a and minus infinity - as indicated by the shaded area.",
null,
"Note: The shaded area in the graph represents the probability that the random variable X is less than or equal to a. This is a cumulative probability . However, the probability that X is exactly equal to a would be zero. A continuous random variable can take on an infinite number of values. The probability that it will equal a specific value (such as a) is always zero.\n\n See also: Tutorial: Discrete and Continuous Random Variables | Probability Distributions"
] | [
null,
"https://stattrek.com/img/normalprob1.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9034902,"math_prob":0.99692285,"size":1681,"snap":"2019-13-2019-22","text_gpt3_token_len":353,"char_repetition_ratio":0.17173524,"word_repetition_ratio":0.072164945,"special_character_ratio":0.19809636,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998994,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-26T10:56:24Z\",\"WARC-Record-ID\":\"<urn:uuid:c5a1f6ec-f49a-47ee-9748-65a00801a96b>\",\"Content-Length\":\"91948\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9dd58bea-89d9-42dc-b101-7aa8e97ce952>\",\"WARC-Concurrent-To\":\"<urn:uuid:87b862c9-7884-40bc-a2cf-83ca32874633>\",\"WARC-IP-Address\":\"34.237.101.58\",\"WARC-Target-URI\":\"https://stattrek.com/statistics/dictionary.aspx?definition=probability%20density%20function\",\"WARC-Payload-Digest\":\"sha1:NF5XAQTMIMHJ7ZSCXM7YOJW45RIGUB4Q\",\"WARC-Block-Digest\":\"sha1:SJSX3QMRP6O44NJF5MH4KO63UQIHO2CZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232259126.83_warc_CC-MAIN-20190526105248-20190526131248-00246.warc.gz\"}"} |
https://documen.tv/question/a-carbon-resistor-is-to-be-used-as-a-thermometer-on-a-winter-day-when-the-temperature-is-4-00-c-15012693-38/ | [
"## A carbon resistor is to be used as a thermometer. On a winter day when the temperature is 4.00 ∘C, the resistance of the carbon resistor is\n\nQuestion\n\nA carbon resistor is to be used as a thermometer. On a winter day when the temperature is 4.00 ∘C, the resistance of the carbon resistor is 217.0 Ω . What is the temperature on a spring day when the resistance is 215.1 Ω ? Take the temperature coefficient of resistivity for carbon to be α = −5.00×10−4 C−1 .\n\nin progress 0\n1 year 2021-09-02T14:56:01+00:00 1 Answers 57 views 0\n\n21.5 °C.\n\nExplanation:\n\nGiven:\n\nα = −5.00 × 10−4 °C−1\n\nTo = 4°C\n\nRo = 217 Ω\n\nRt = 215.1 Ω\n\nRt/Ro = 1 + α(T – To)\n\n215.1/217 = 1 + (-5 × 10^−4) × (T – 4)\n\n-0.00876 = -5 × 10^−4 × (T – 4)\n\n17.5 = (T – 4)\n\nT = 21.5 °C."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7331478,"math_prob":0.9986235,"size":789,"snap":"2022-40-2023-06","text_gpt3_token_len":307,"char_repetition_ratio":0.1388535,"word_repetition_ratio":0.29113925,"special_character_ratio":0.46007603,"punctuation_ratio":0.14925373,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996096,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T21:45:43Z\",\"WARC-Record-ID\":\"<urn:uuid:6c09afe1-33e4-4152-91bc-9b7d036aa64c>\",\"Content-Length\":\"92290\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78a8b366-2ceb-4e43-a31e-b24ba77c075b>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5e79c80-ca30-47a1-80bf-fd39d4f789bf>\",\"WARC-IP-Address\":\"5.78.45.21\",\"WARC-Target-URI\":\"https://documen.tv/question/a-carbon-resistor-is-to-be-used-as-a-thermometer-on-a-winter-day-when-the-temperature-is-4-00-c-15012693-38/\",\"WARC-Payload-Digest\":\"sha1:VOI6RHMA3FWBGLH7YX6ZWAYAEUW7RW5L\",\"WARC-Block-Digest\":\"sha1:7HAPEEBSBQIQJKW7STCLFTKH77E5BV44\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499768.15_warc_CC-MAIN-20230129211612-20230130001612-00600.warc.gz\"}"} |
https://www.lumoslearning.com/llwp/resources/educational-videos-k-12-elementary-middle-school.html?id=1849479 | [
"The Remainder Theorem - Example 1 - Free Educational videos for Students in K-12 | Lumos Learning\n\n## The Remainder Theorem - Example 1 - Free Educational videos for Students in k-12\n\nTranscript\n00:0\nSummarizer\n\n#### DESCRIPTION:\n\nYoutube Presents The Remainder Theorem - Example 1 an educational video resources on english language arts.\n\n#### OVERVIEW:\n\nThe Remainder Theorem - Example 1 is a free educational video by PatrickJMT.It helps students in grades 9,10,11,12 practice the following standards HSA.APR.B.2.\n\nThis page not only allows students and teachers view The Remainder Theorem - Example 1 but also find engaging Sample Questions, Apps, Pins, Worksheets, Books related to the following topics.\n\n1. HSA.APR.B.2 : Know and apply the Remainder Theorem: For a polynomial p(x) and a number a, the remainder on division by x - a is p(a), so p(a) = 0 if and only if (x - a) is a factor of p(x).\n\n9\n10\n11\n12\n\nHSA.APR.B.2\n\n### RELATED VIDEOS\n\nRate this Video?\n0\n\n0 Ratings & 0 Reviews\n\n5\n0\n0\n4\n0\n0\n3\n0\n0\n2\n0\n0\n1\n0\n0",
null,
"EdSearch WebSearch",
null,
""
] | [
null,
"https://statc.lumoslearning.com/llwp/wp-content/uploads/2018/01/edSearch_Logo_LowRes-e1517334931747.png",
null,
"https://googleads.g.doubleclick.net/pagead/viewthroughconversion/1066999506/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8040768,"math_prob":0.71094394,"size":807,"snap":"2021-31-2021-39","text_gpt3_token_len":212,"char_repetition_ratio":0.14819427,"word_repetition_ratio":0.060606062,"special_character_ratio":0.25402725,"punctuation_ratio":0.17159763,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9649594,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T13:30:47Z\",\"WARC-Record-ID\":\"<urn:uuid:773ef018-bb34-4fc0-85c2-b4870bdd7334>\",\"Content-Length\":\"255637\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d004fa22-5be8-4836-b4bd-cab4798755ef>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2046805-1dc5-4a36-9608-50677882d75d>\",\"WARC-IP-Address\":\"3.212.85.120\",\"WARC-Target-URI\":\"https://www.lumoslearning.com/llwp/resources/educational-videos-k-12-elementary-middle-school.html?id=1849479\",\"WARC-Payload-Digest\":\"sha1:3P5L6G7NW3NDMY63NXP2PXVDCAIAFKOX\",\"WARC-Block-Digest\":\"sha1:HRWRQLJQ6LHFTGDDU2VFCKILDF6WHZLJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057622.15_warc_CC-MAIN-20210925112158-20210925142158-00267.warc.gz\"}"} |
https://riverkc.com/6016/worksheet/beginner-algebra/ | [
"# Beginner Algebra",
null,
"Division Elementary Algebra Worksheet Printable Elementary Worksheets Basic Math Worksheets Elementary Algebra",
null,
"",
null,
"For Dummies Pre Algebra Basic Math Algebra",
null,
"Algebra For Beginners Worksheet Education Com Algebra Worksheets Basic Algebra Pre Algebra",
null,
"Pre Algebra Beginner Level Taught By Nurten C Afterschoollearning Teachers Bayarea Dublin Sanfrancisco Oakland Hayward Basic Math Pre Algebra Sat Math",
null,
"An Explanation Of Basic Algebra Terms And Terminology Operations Terms Variables Con Algebraic Expressions Vocabulary Basic Algebra Mathematical Expression",
null,
"Worksheets For Kids Free Printables Combining Like Terms Like Terms Algebra Worksheets",
null,
"Important Algebraic Formula Studying Math Basic Math Learning Mathematics",
null,
"23 Basic Algebra Worksheets Pdf Beginning Algebra Worksheets Openlayers Basic Algebra Basic Algebra Worksheets Algebra Worksheets",
null,
"Algebra Cheat Sheet Logarithm 80k Views Algebra Cheat Sheet Algebra Cheat Sheets",
null,
"Simple Algebra Worksheet Free Printable Educational Worksheet Algebra Worksheets Algebra Equations Worksheets Basic Algebra Worksheets",
null,
"Newbie Electronics Hobbyist Reference Poster Bundle Print Download Basic Algebra Math Formulas Math Methods",
null,
"Worksheetfun Free Printable Worksheets Algebra Worksheets Algebra Printable Worksheets",
null,
"Worksheet Basic Algebra 1 Basic Algebra Algebra Algebra 1",
null,
"Algebra For Beginners Worksheet Education Com Algebra Worksheets Basic Algebra Pre Algebra",
null,
"Basic Algebra I Havewho Has Game By Thatmathlady Via Slideshare Basic Algebra Algebra I Algebra Games",
null,
"Algebra 1 Weekly1 Warm Up Starter Algebra Algebra 1 Basic Math Skills",
null,
"8th Grade Math Worksheets Algebra Worksheets 8th Grade Math Worksheets Mathematics Worksheets",
null,
"Introduction To Algebra Vocabulary Worksheet Vocabulary Worksheets Algebra Vocabulary Algebra Worksheets"
] | [
null,
"https://i.pinimg.com/originals/9f/5d/5a/9f5d5a01a7b41bae5a8bf63c8d29d48d.png",
null,
"https://i.pinimg.com/originals/20/3e/15/203e15f04395d527e7687e631876f0ea.png",
null,
"https://i.pinimg.com/originals/9a/0a/5a/9a0a5a54039c6c26101819fb55c3c96d.jpg",
null,
"https://riverkc.com/wp-content/uploads/2021/02/15e9da1a58947052f1b6cb96d954927e-1.gif",
null,
"https://i.pinimg.com/originals/20/3e/15/203e15f04395d527e7687e631876f0ea.png",
null,
"https://i.pinimg.com/originals/d7/b4/21/d7b4215939ea354a904ffa99550789fd.png",
null,
"https://i.pinimg.com/originals/84/20/40/842040eb4dbf78f3e93bdf0a30b2a8e6.gif",
null,
"https://i.pinimg.com/originals/e1/79/16/e17916d690d46d52880364f6d6c4e735.jpg",
null,
"https://i.pinimg.com/736x/e5/25/e7/e525e7e053416c46603e3fdabaf4222e.jpg",
null,
"https://i.pinimg.com/170x/ab/fd/a4/abfda4006f5c5ccf951d3f6d9886eb01.jpg",
null,
"https://riverkc.com/wp-content/uploads/2021/02/ec9f366a89615afc8121dccb9adb9d5b-7.png",
null,
"https://i.pinimg.com/originals/37/15/8f/37158f84c29703a1c3770ecf53e34722.png",
null,
"https://i.pinimg.com/originals/ca/7d/f9/ca7df956a6d0375e0a437be9e57ddea7.png",
null,
"https://i.pinimg.com/originals/16/26/d2/1626d21315d6e19d65461e29a3b3fd94.jpg",
null,
"https://i.pinimg.com/originals/46/e3/22/46e322b1e00ed537a0231c35ff09b0aa.gif",
null,
"https://i.pinimg.com/originals/d1/00/fc/d100fc97e815495bd8e6bf5250adb3b4.jpg",
null,
"https://i.pinimg.com/600x315/43/f8/9c/43f89cd978f0b5637c594546931f85b0.jpg",
null,
"https://i.pinimg.com/564x/5b/b0/9b/5bb09bad7c6aee2e63ef01bf0c2605d6.jpg",
null,
"https://i.pinimg.com/474x/b6/08/cc/b608cc35d698ab1d17440f6ac5ccae8d.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54631686,"math_prob":0.44304508,"size":1773,"snap":"2021-43-2021-49","text_gpt3_token_len":295,"char_repetition_ratio":0.36065573,"word_repetition_ratio":0.07079646,"special_character_ratio":0.13536379,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995517,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,5,null,null,null,1,null,9,null,1,null,1,null,2,null,null,null,1,null,null,null,1,null,1,null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T08:38:02Z\",\"WARC-Record-ID\":\"<urn:uuid:7711ee52-695f-4e50-8f59-5855f06985bb>\",\"Content-Length\":\"43758\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae8d1276-4def-4c80-845b-15a97a2f42d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf962cb0-bd82-4b0b-8afd-e78625df0466>\",\"WARC-IP-Address\":\"104.21.96.47\",\"WARC-Target-URI\":\"https://riverkc.com/6016/worksheet/beginner-algebra/\",\"WARC-Payload-Digest\":\"sha1:ESKYFK3KMIZU325IY72V2P22SPJ5QMFW\",\"WARC-Block-Digest\":\"sha1:V3PMOG6DZHB456YXJSHJIKPI24FY43ZD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323584554.98_warc_CC-MAIN-20211016074500-20211016104500-00121.warc.gz\"}"} |
https://hakanyurdakul.com/category/math/ | [
"## Curve Point Distribution by Simpson Rule\n\nC++. This project is still going on; it needs for some improvements. Its purpose is to equally distribute desired number of points on a curve. To do that, I applied Simpson rule by sweeping small […]\n\n## Latitude Point Distribution\n\nC++. This is for symmetrical distribution of points in each latitude on a sphere. As an input, different number of points for each latitude can be assigned. Top and bottom coordinates, of course, are the […]\n\n## Deepest Pit\n\nC++. This is a question from Codility. Question: A non-empty zero-indexed array A consisting of N integers is given. A pit in this array is any triplet of integers (P, Q, R) such that: 0 ≤ P < […]\n\n## Top Cut of Convex Hull\n\nC++. I developed an algorithm based on “three penny algorithm” to find the top cut of a convex hull for dense circle shaped data. It takes the most left point as an entry point like Jarvis’s […]\n\n## Totient Function\n\nC++. This is the Problem 69 from Project Euler. Question: Euler’s Totient function, φ(n) [sometimes called the phi function], is used to determine the number of numbers less than n which are relatively prime to n. […]\n\n## Pell Equation\n\nC++. This is the Problem 66 from Project Euler. Question: Consider quadratic Diophantine equations of the form: x2 – Dy2 = 1 For example, when D=13, the minimal solution in x is 6492 – 13×1802 = 1. It can be assumed that there are […]\n\n## Pentagonal Numbers\n\nC++. This is the Problem 44 from Project Euler. Question: Pentagonal numbers are generated by the formula, Pn=n(3n−1)/2. The first ten pentagonal numbers are: 1, 5, 12, 22, 35, 51, 70, 92, 117, 145, … It […]\n\n## Huge Multiplication\n\nC++, Matlab. This was one of the assignments in the algorithm course I took before. Inputs are two big integers like 60 digits each. I implemented traditional multiplication method into my algorithm. Multiplication stage takes […]\n\n## Digit Fifth Powers\n\nC++, Matlab. This is the Problem 30 from Project Euler. Question: Surprisingly there are only three numbers that can be written as the sum of fourth powers of their digits: 1634 = 1^4 + 6^4 + 3^4 + […]\n\n## Numbers on Spiral Diagonals\n\nC++, Matlab. This is the Problem 28 from Project Euler website. Question: Starting with the number 1 and moving to the right in a clockwise direction a 5 by 5 spiral is formed as follows: 21 22 […]\n\n1 2"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91184485,"math_prob":0.96955824,"size":2332,"snap":"2023-14-2023-23","text_gpt3_token_len":577,"char_repetition_ratio":0.12972508,"word_repetition_ratio":0.19138756,"special_character_ratio":0.27744424,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976782,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T19:58:58Z\",\"WARC-Record-ID\":\"<urn:uuid:97f591be-5608-4fd4-a888-46a1c5cd35a9>\",\"Content-Length\":\"49530\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b98136bd-8b58-464c-bc64-2c82891af6cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:1397e6fd-3ef9-4933-990b-fc4be485f130>\",\"WARC-IP-Address\":\"92.249.44.254\",\"WARC-Target-URI\":\"https://hakanyurdakul.com/category/math/\",\"WARC-Payload-Digest\":\"sha1:GXNOXOLQN4EQJSJR3JIAIECNL6PIMSOE\",\"WARC-Block-Digest\":\"sha1:SZN7FNAMZEMFPEQJXQY6QDFTYDIKHKCX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224650264.9_warc_CC-MAIN-20230604193207-20230604223207-00532.warc.gz\"}"} |
https://www.aimsciences.org/journal/1078-0947/2007/17/3 | [
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"ISSN:\n1078-0947\n\neISSN:\n1553-5231\n\n## Journal Home\n\n• Open Access Articles",
null,
"All Issues\n\n## Discrete & Continuous Dynamical Systems - A\n\nAugust 2007 , Volume 17 , Issue 3\n\nSelect all articles\n\nExport/Reference:\n\n2007, 17(3): 449-480 doi: 10.3934/dcds.2007.17.449 +[Abstract](1572) +[PDF](336.5KB)\nAbstract:\nIn this paper we prove the existence of a solution in Lloc$(\\Omega)$ to the Euler-Lagrange equation for the variational problem\n\ninf$\\bar u + W^{1,\\infty}_0(\\Omega)$$\\int_{\\Omega} (\\I_D(\\nabla u) + g(u)) dx,\\ \\ \\$ (0.1)\n\nwith $D$ convex closed subset of $\\R^n$ with non empty interior. We next show that if D* is strictly convex, then the Euler-Lagrange equation can be reduced to an ODE along characteristics, and we deduce that the solution to Euler-Lagrange is different from $0$ a.e. and satisfies a uniqueness property. Using these results, we prove a conjecture on the existence of variations on vector fields .\n\n2007, 17(3): 481-500 doi: 10.3934/dcds.2007.17.481 +[Abstract](2088) +[PDF](281.9KB)\nAbstract:\nWe study the relations between the global dynamics of the 3D Leray-$\\alpha$ model and the 3D Navier-Stokes system. We prove that time shifts of bounded sets of solutions of the Leray-$\\alpha$ model converges to the trajectory attractor of the 3D Navier-Stokes system as time tends to infinity and $\\alpha$ approaches zero. In particular, we show that the trajectory attractor of the Leray-$\\alpha$ model converges to the trajectory attractor of the 3D Navier-Stokes system when $\\alpha \\rightarrow 0\\+.$\n2007, 17(3): 501-507 doi: 10.3934/dcds.2007.17.501 +[Abstract](1711) +[PDF](125.7KB)\nAbstract:\nWe give an example where for an open set of Lagrangians on the n-torus there is at least one cohomology class c with at least n different ergodic c-minimizing measures. One of the problems posed by Ricardo Mañé in his paper 'Generic properties and problems of minimizing measures of Lagrangian systems' (Nonlinearity, 1996) was the following:\nIs it true that for generic Lagrangians every minimizing measure is uniquely ergodic?\nA weaker statement is that for generic Lagrangians every cohomology class has exactly one minimizing measure, which of course will be ergodic. Our example shows that this can't be true and as a consequence one can hope to prove at most that for a generic Lagrangian, for every cohomology class there are at most n corresponding ergodic minimizing measures, where n is the dimension of the first cohomology group.\n2007, 17(3): 509-528 doi: 10.3934/dcds.2007.17.509 +[Abstract](1781) +[PDF](256.9KB)\nAbstract:\nIn this paper, we study various dissipative mechanics associated with the Boussinesq systems which model two-dimensional small amplitude long wavelength water waves. We will show that the decay rate for the damped one-directional model equations, such as the KdV and BBM equations, holds for some of the damped Boussinesq systems.\n2007, 17(3): 529-540 doi: 10.3934/dcds.2007.17.529 +[Abstract](2096) +[PDF](167.9KB)\nAbstract:\nWe consider the Lorenz system $\\dot x = \\s (y-x)$, $\\dot y =rx -y-xz$ and $\\dot z = -bz + xy$; and the Rössler system $\\dot x = -(y+z)$, $\\dot y = x +ay$ and $\\dot z = b-cz + xz$. Here, we study the Hopf bifurcation which takes place at $q_{\\pm}=(\\pm\\sqrt{br-b},\\pm\\sqrt{br-b},r-1),$ in the Lorenz case, and at $s_{\\pm}=(\\frac{c+\\sqrt{c^2-4ab}}{2},-\\frac{c+\\sqrt{c^2-4ab}}{2a}, \\frac{c\\pm\\sqrt{c^2-4ab}}{2a})$ in the Rössler case. As usual this Hopf bifurcation is in the sense that an one -parameter family in ε of limit cycles bifurcates from the singular point when ε=0. Moreover, we can determine the kind of stability of these limit cycles. In fact, for both systems we can prove that all the bifurcated limit cycles in a neighborhood of the singular point are either a local attractor, or a local repeller, or they have two invariant manifolds, one stable and the other unstable, which locally are formed by two $2$-dimensional cylinders. These results are proved using averaging theory. The method of studying the Hopf bifurcation using the averaging theory is relatively general and can be applied to other $3$- or $n$-dimensional differential systems.\n2007, 17(3): 541-552 doi: 10.3934/dcds.2007.17.541 +[Abstract](1446) +[PDF](192.1KB)\nAbstract:\nIn this paper we consider some systems of ordinary differential equations which are related to coagulation-fragmentation processes. In particular, we obtain explicit solutions $\\{c_k(t)\\}$ of such systems which involve certain coefficients obtained by solving a suitable algebraic recurrence relation. The coefficients are derived in two relevant cases: the high-functionality limit and the Flory-Stockmayer model. The solutions thus obtained are polydisperse (that is, $c_k(0)$ is different from zero for all $k \\ge 1$) and may exhibit monotonically increasing or decreasing total mass. We also solve a monodisperse case (where $c_1(0)$ is different from zero but $c_k(0)$ is equal to zero for all $k \\ge 2$) in the high-functionality limit. In contrast to the previous result, the corresponding solution is now shown to display a sol-gel transition when the total initial mass is larger than one, but not when such mass is less than or equal to one.\n2007, 17(3): 553-560 doi: 10.3934/dcds.2007.17.553 +[Abstract](1503) +[PDF](145.6KB)\nAbstract:\nThe sectional-hyperbolic sets constitute a class of partially hyperbolic sets introduced in to describe robustly transitive singular dynamics on $n$-manifolds (e.g. the multidimensional Lorenz attractor ). Here we prove that a transitive sectional-hyperbolic set with singularities contains no local strong stable manifold through any of its points. Hence a transitive, isolated, sectional-hyperbolic set containing a local strong stable manifold is a hyperbolic saddle-type repeller. In addition, a proper transitive sectional-hyperbolic set on a compact $n$-manifold has empty interior and topological dimension $\\leq n-1$. It follows that a singular-hyperbolic attractor with singularities on a compact $3$-manifold has topological dimension $2$. Hence such an attractor is expanding, i.e., its topological dimension coincides with the dimension of its central subbundle. These results apply to the robustly transitive sets considered in , and also to the Lorenz attractor in the Lorenz equation .\n2007, 17(3): 561-587 doi: 10.3934/dcds.2007.17.561 +[Abstract](1572) +[PDF](316.7KB)\nAbstract:\nWe study chain transitivity and Morse decompositions of discrete and continuous-time semiflows on fiber bundles with emphasis on (generalized) flag bundles. In this case an algebraic description of the chain transitive sets is given. Our approach consists in embedding the semiflow in a semigroup of continuous maps to take advantage of the good properties of the semigroup actions on the flag manifolds.\n2007, 17(3): 589-627 doi: 10.3934/dcds.2007.17.589 +[Abstract](1238) +[PDF](507.3KB)\nAbstract:\nWe present a model illustrating heterodimensional cycles (i.e., cycles associated to saddles having different indices) as a mechanism leading to the collision of hyperbolic homoclinic classes (of points of different indices) and thereafter to the persistence of (heterodimensional) cycles. The collisions are associated to secondary (saddle-node) bifurcations appearing in the unfolding of the initial cycle.\n2007, 17(3): 629-652 doi: 10.3934/dcds.2007.17.629 +[Abstract](1620) +[PDF](629.6KB)\nAbstract:\nWe present a topological method of obtaining the existence of infinite number of symmetric periodic orbits for systems with reversing symmetry. The method is based on covering relations. We apply the method to a four-dimensional reversible map.\n2007, 17(3): 653-670 doi: 10.3934/dcds.2007.17.653 +[Abstract](1430) +[PDF](242.9KB)\nAbstract:\nProperties of the sequence ind$(\\infty,\\id-f^{m}),$ $m=1,2,\\ldots$ where $f^{m}$ is the $m$-th iterate of the mapping $f:R^{d}\\to R^{d}$, and ind denotes the Kronecker index are investigated. The case when $f$ has the asymptotic derivative $A$ at infinity and some eigenvalues of $A$ are roots of unity is of primary interest.\n2007, 17(3): 671-689 doi: 10.3934/dcds.2007.17.671 +[Abstract](1578) +[PDF](567.6KB)\nAbstract:\nPerron-Frobenius operators and their eigendecompositions are increasingly being used as tools of global analysis for higher dimensional systems. The numerical computation of large, isolated eigenvalues and their corresponding eigenfunctions can reveal important persistent structures such as almost-invariant sets, however, often little can be said rigorously about such calculations. We attempt to explain some of the numerically observed behaviour by constructing a hyperbolic map with a Perron-Frobenius operator whose eigendecomposition is representative of numerical calculations for hyperbolic systems. We explicitly construct an eigenfunction associated with an isolated eigenvalue and prove that a special form of Ulam's method well approximates the isolated spectrum and eigenfunctions of this map.\n2007, 17(3): 690-690 doi: 10.3934/dcds.2007.17.690 +[Abstract](1225) +[PDF](45.9KB)\nAbstract:\nN/A\n\n2018 Impact Factor: 1.143"
] | [
null,
"https://www.aimsciences.org:443/style/web/images/white_google.png",
null,
"https://www.aimsciences.org:443/style/web/images/white_facebook.png",
null,
"https://www.aimsciences.org:443/style/web/images/white_twitter.png",
null,
"https://www.aimsciences.org:443/style/web/images/white_linkedin.png",
null,
"https://www.aimsciences.org/fileAIMS/journal/img/cover/book_cover.png",
null,
"https://www.aimsciences.org:443/style/web/images/OA.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89119476,"math_prob":0.9812513,"size":7991,"snap":"2019-51-2020-05","text_gpt3_token_len":1920,"char_repetition_ratio":0.10229123,"word_repetition_ratio":0.02242525,"special_character_ratio":0.21761982,"punctuation_ratio":0.08067941,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9942216,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T19:50:12Z\",\"WARC-Record-ID\":\"<urn:uuid:c4ab3dfd-7a36-4697-b9e3-44981a11d7f7>\",\"Content-Length\":\"123655\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5e8000e-a3a5-447a-a514-2a0b8e3d14c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf834e7e-9057-4541-bb55-907bb3f3409d>\",\"WARC-IP-Address\":\"216.227.221.143\",\"WARC-Target-URI\":\"https://www.aimsciences.org/journal/1078-0947/2007/17/3\",\"WARC-Payload-Digest\":\"sha1:3BE5IHRAJHD3QMEQFMU7H4D3IKSQV5QF\",\"WARC-Block-Digest\":\"sha1:MVP2L7REGGQ6VLZ4TXV75RDDWQNTAW7N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594705.17_warc_CC-MAIN-20200119180644-20200119204644-00218.warc.gz\"}"} |
https://escholarship.org/uc/item/1cw491jk | [
"",
null,
"Open Access Publications from the University of California\n\n## Last Passage Percolation and the Slow Bond Problem\n\n• Author(s): Sarkar, Sourav\n• et al.\nAbstract\n\nLast passage percolation models are fundamental examples in statistical mechanics where the energy of a path is maximized over all directed paths with given endpoints in a random environment, and the maximizing paths are called {\\em geodesics}. Here we consider the Poissonian last passage percolation (LPP) and the exponential directed last passage percolation\n\n(DLPP), the latter having a standard coupling with another classical interacting particle system, the totally asymmetric simple exclusion process or TASEP. These belong to the so-called KPZ universality class, for which exact algebraic formulae have led to precise results for fluctuations and scaling limits. However, such formulae are not very robust and studying the geometry of the geodesics can often provide new insights into these models. Here we consider three problems in each of these three models; exponential DLPP, TASEP and Poissonian LPP, and see how geometric and probabilistic techniques solve such problems.\n\nIn the first problem, we study finer properties of the coalescence structure of finite and semi-infinite geodesics for exactly solvable models of last passage percolation. We consider directed last passage percolation on $\\Z^2$ with i.i.d. exponential weights on the vertices. Fix two points $v_1=(0,0)$ and $v_2=(0, \\lfloor k^{2/3} \\rfloor)$ for some $k>0$, and consider the maximal paths $\\Gamma_1$ and $\\Gamma_2$ starting at $v_1$ and $v_2$ respectively to the point $(n,n)$ for $n\\gg k$. Our object of study is the point of coalescence, i.e., the point $v\\in \\Gamma_1\\cap \\Gamma_2$ with smallest $|v|_1$. We establish that the distance to coalescence $|v|_1$ scales as $k$, by showing the upper tail bound $\\P(|v|_1> Rk) \\leq R^{-c}$ for some $c>0$. We also consider the problem of coalescence for semi-infinite geodesics. For the almost surely unique semi-infinite geodesics in the direction $(1,1)$ starting from $v_3=(-\\lfloor k^{2/3} \\rfloor , \\lfloor k^{2/3}\\rfloor)$ and $v_4=(\\lfloor k^{2/3} \\rfloor ,- \\lfloor k^{2/3}\\rfloor)$, we establish the optimal tail estimate $\\P(|v|_1> Rk) \\asymp R^{-2/3}$, for the point of coalescence $v$. This answers a question left open by Pimentel(2016) who proved the corresponding lower bound.\n\nNext, we study the slow bond\" model, where the totally asymmetric simple exclusion process (TASEP) on $\\Z$ is modified by adding a slow bond at the origin. The slow bond increases the particle density immediately to its left and decreases the particle density immediately to its right. Whether or not this effect is detectable in the macroscopic current started from the step initial condition has attracted much interest over the years and this question was settled recently in Basu-Sidoravicius-Sly (2014) where it was shown that the current is reduced even for arbitrarily small strength of the defect. Following non-rigorous physics arguments in Janowsky-Lebowitz(1992,1994) and some unpublished works by Bramson, a conjectural description of properties of invariant measures of TASEP with a slow bond at the origin was provided by Liggett in his book (1999). We establish Liggett's conjectures and in particular show that, starting from step initial condition, TASEP with a slow bond at the origin, as a Markov process, converges in law to an invariant measure that is asymptotically close to product measures with different densities far away from the origin towards left and right. Our proof exploits the correspondence between TASEP and the last passage percolation on $\\Z^2$ with exponential weights and uses the understanding of geometry of maximal paths in those models.\n\nFinally, we study the modulus of continuity of polymer fluctuations and weight profiles in Poissonian LPP. The geodesics and their energy in Poissonian LPP can be scaled so that transformed geodesics cross unit distance and have fluctuations and scaled energy of unit order, and we refer to scaled geodesics\n\nas {\\em polymers} and\n\ntheir scaled energies as {\\em weights}. Polymers may be viewed as random functions of the vertical coordinate and, when they are, we show that they have modulus of continuity whose order is at most $t^{2/3}\\big(\\log t^{-1}\\big)^{1/3}$.\n\nThe power of one-third in the logarithm may be expected to be sharp and in a related problem we show that it is: among polymers in the unit box whose endpoints have vertical separation $t$ (and a horizontal separation of the same order),\n\nthe maximum transversal fluctuation has order $t^{2/3}\\big(\\log t^{-1}\\big)^{1/3}$.\n\nRegarding the orthogonal direction, in which growth occurs, we show that, when one endpoint of the polymer is fixed at $(0,0)$ and the other is varied vertically over $(0,z)$, $z\\in [1,2]$, the resulting random weight profile has sharp modulus of continuity of order $t^{1/3}\\big(\\log t^{-1}\\big)^{2/3}$.\n\nIn this way, we identify exponent pairs of $(2/3,1/3)$\n\nand $(1/3,2/3)$ in power law and polylogarithmic correction, respectively for polymer fluctuation, and polymer weight under vertical endpoint perturbation.\n\nThe two exponent pairs describe [Hammond(2012, 2012, 2011)] the fluctuation of the boundary separating two phases in subcritical planar random cluster models."
] | [
null,
"https://escholarship.org/images/logo_eschol-mobile.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87680197,"math_prob":0.9929334,"size":5393,"snap":"2019-43-2019-47","text_gpt3_token_len":1335,"char_repetition_ratio":0.0957506,"word_repetition_ratio":0.035409037,"special_character_ratio":0.24105322,"punctuation_ratio":0.08895706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.997257,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T07:14:12Z\",\"WARC-Record-ID\":\"<urn:uuid:7ee5db3b-1274-4bf9-99ba-14ed17bba82d>\",\"Content-Length\":\"66222\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dea4c09a-7053-43a3-bd57-4a506e1bc37d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5ec6a7c8-74e4-444f-a42e-e79da233473f>\",\"WARC-IP-Address\":\"13.225.62.103\",\"WARC-Target-URI\":\"https://escholarship.org/uc/item/1cw491jk\",\"WARC-Payload-Digest\":\"sha1:7PYYGMWW2GWIKR3JGA4W5A6KHOL7CV4B\",\"WARC-Block-Digest\":\"sha1:HIFEJPPRDKOLSB7M3ESYJYFVU3LNAFIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670036.23_warc_CC-MAIN-20191119070311-20191119094311-00111.warc.gz\"}"} |
https://www.autoitscript.com/forum/topic/141207-sorting-array-using-main-and-secondary-index/ | [
"## Recommended Posts\n\nHi!\n\nThis is my small contribution.\n\nThis code will first generate a 2 column array, and random some values. It will then sort the array using the 2nd column, and then sort some more using the 1st column. I have searched a lot and didnt found anything so i build mine!\n\nTO DO: Use UBOUND(ARRAY) to find the array size\n\n```#include <array.au3>\n;create Sample Array\nLocal \\$Array\n\\$Array = \"index\"\nFor \\$i = 1 To 99\n\\$Array[\\$i] = Int(Random() * 5) * 10\n\\$Array[\\$i] = Int(Random() * 5) * 10\nNext\n\\$Array = 33\n\\$Array = 33\n\\$Array = 22\n\\$Array = 22\n_ArrayDisplay(\\$Array,\"Before\")\n\\$Main_Index = 2\n\\$Secondary_Index = 1\n_ArraySort(\\$Array, 0, 0, 0, \\$Main_Index)\n\\$start = 1\nDo\n\\$i = \\$start\nIf \\$Array[\\$i][\\$Main_Index] = \\$Array[\\$i + 1][\\$Main_Index] Then\nDo\n\\$i += 1\nUntil \\$i = 99 Or \\$Array[\\$i][\\$Main_Index] <> \\$Array[\\$i + 1][\\$Main_Index]\n_ArraySort(\\$Array, 1, \\$start, \\$i, \\$Secondary_Index);small sort secondary index\nEndIf\n\\$start = \\$i + 1\nUntil \\$start >= 98 ;Array size -1\n_ArrayDisplay(\\$Array,\"After\")```\n\nHope it helps\n\n## Create an account\n\nRegister a new account\n\n• ### Recently Browsing 0 members\n\n×\n\n• Wiki\n\n• Back\n\n• #### Beta\n\n• Git\n• FAQ\n• Our Picks\n×\n• Create New..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5689483,"math_prob":0.97832525,"size":1120,"snap":"2020-45-2020-50","text_gpt3_token_len":377,"char_repetition_ratio":0.16129032,"word_repetition_ratio":0.022222223,"special_character_ratio":0.38035715,"punctuation_ratio":0.10599078,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9737011,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T23:35:39Z\",\"WARC-Record-ID\":\"<urn:uuid:27772740-1472-44ab-abb2-3bd8be15c8df>\",\"Content-Length\":\"74032\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa31b191-15d6-4282-8b84-5ca7d1e60269>\",\"WARC-Concurrent-To\":\"<urn:uuid:bcee909a-ded3-42c2-a528-16af22ac84ed>\",\"WARC-IP-Address\":\"212.227.91.231\",\"WARC-Target-URI\":\"https://www.autoitscript.com/forum/topic/141207-sorting-array-using-main-and-secondary-index/\",\"WARC-Payload-Digest\":\"sha1:VBDOGGID7EEQW4K6BQ3WPQVZXWCWUJZF\",\"WARC-Block-Digest\":\"sha1:GJCC6KGLI37OJF3GDMOPLVJLWFSWX7KX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141515751.74_warc_CC-MAIN-20201130222609-20201201012609-00646.warc.gz\"}"} |
https://www.ginac.de/ginac.git/?p=ginac.git;a=blob;f=check/check_matrices.cpp;h=91f4b5da4b64ace42691890fbf614f9268d0a467;hb=91f4b5da4b64ace42691890fbf614f9268d0a467 | [
"]> www.ginac.de Git - ginac.git/blob - check/check_matrices.cpp\n91f4b5da4b64ace42691890fbf614f9268d0a467\n1 /** @file check_matrices.cpp\n2 *\n3 * Here we test manipulations on GiNaC's symbolic matrices. They are a\n4 * well-tried resource for cross-checking the underlying symbolic\n5 * manipulations. */\n7 /*\n8 * GiNaC Copyright (C) 1999-2003 Johannes Gutenberg University Mainz, Germany\n9 *\n10 * This program is free software; you can redistribute it and/or modify\n11 * it under the terms of the GNU General Public License as published by\n12 * the Free Software Foundation; either version 2 of the License, or\n13 * (at your option) any later version.\n14 *\n15 * This program is distributed in the hope that it will be useful,\n16 * but WITHOUT ANY WARRANTY; without even the implied warranty of\n17 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n18 * GNU General Public License for more details.\n19 *\n20 * You should have received a copy of the GNU General Public License\n21 * along with this program; if not, write to the Free Software\n22 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n23 */\n25 #include \"checks.h\"\n27 /* determinants of some sparse symbolic matrices with coefficients in\n28 * an integral domain. */\n29 static unsigned integdom_matrix_determinants()\n30 {\n31 unsigned result = 0;\n32 symbol a(\"a\");\n34 for (unsigned size=3; size<22; ++size) {\n35 matrix A(size,size);\n36 // populate one element in each row:\n37 for (unsigned r=0; r<size-1; ++r)\n38 A.set(r,unsigned(rand()%size),dense_univariate_poly(a,5));\n39 // set the last row to a linear combination of two other lines\n40 // to guarantee that the determinant is zero:\n41 for (unsigned c=0; c<size; ++c)\n42 A.set(size-1,c,A(0,c)-A(size-2,c));\n43 if (!A.determinant().is_zero()) {\n44 clog << \"Determinant of \" << size << \"x\" << size << \" matrix \"\n45 << endl << A << endl\n46 << \"was not found to vanish!\" << endl;\n47 ++result;\n48 }\n49 }\n51 return result;\n52 }\n54 /* determinants of some symbolic matrices with multivariate rational function\n55 * coefficients. */\n56 static unsigned rational_matrix_determinants()\n57 {\n58 unsigned result = 0;\n59 symbol a(\"a\"), b(\"b\"), c(\"c\");\n61 for (unsigned size=3; size<9; ++size) {\n62 matrix A(size,size);\n63 for (unsigned r=0; r<size-1; ++r) {\n64 // populate one or two elements in each row:\n65 for (unsigned ec=0; ec<2; ++ec) {\n66 ex numer = sparse_tree(a, b, c, 1+rand()%4, false, false, false);\n67 ex denom;\n68 do {\n69 denom = sparse_tree(a, b, c, rand()%2, false, false, false);\n70 } while (denom.is_zero());\n71 A.set(r,unsigned(rand()%size),numer/denom);\n72 }\n73 }\n74 // set the last row to a linear combination of two other lines\n75 // to guarantee that the determinant is zero:\n76 for (unsigned co=0; co<size; ++co)\n77 A.set(size-1,co,A(0,co)-A(size-2,co));\n78 if (!A.determinant().is_zero()) {\n79 clog << \"Determinant of \" << size << \"x\" << size << \" matrix \"\n80 << endl << A << endl\n81 << \"was not found to vanish!\" << endl;\n82 ++result;\n83 }\n84 }\n86 return result;\n87 }\n89 /* Some quite funny determinants with functions and stuff like that inside. */\n90 static unsigned funny_matrix_determinants()\n91 {\n92 unsigned result = 0;\n93 symbol a(\"a\"), b(\"b\"), c(\"c\");\n95 for (unsigned size=3; size<8; ++size) {\n96 matrix A(size,size);\n97 for (unsigned co=0; co<size-1; ++co) {\n98 // populate one or two elements in each row:\n99 for (unsigned ec=0; ec<2; ++ec) {\n100 ex numer = sparse_tree(a, b, c, 1+rand()%3, true, true, false);\n101 ex denom;\n102 do {\n103 denom = sparse_tree(a, b, c, rand()%2, false, true, false);\n104 } while (denom.is_zero());\n105 A.set(unsigned(rand()%size),co,numer/denom);\n106 }\n107 }\n108 // set the last column to a linear combination of two other columns\n109 // to guarantee that the determinant is zero:\n110 for (unsigned ro=0; ro<size; ++ro)\n111 A.set(ro,size-1,A(ro,0)-A(ro,size-2));\n112 if (!A.determinant().is_zero()) {\n113 clog << \"Determinant of \" << size << \"x\" << size << \" matrix \"\n114 << endl << A << endl\n115 << \"was not found to vanish!\" << endl;\n116 ++result;\n117 }\n118 }\n120 return result;\n121 }\n123 /* compare results from different determinant algorithms.*/\n124 static unsigned compare_matrix_determinants()\n125 {\n126 unsigned result = 0;\n127 symbol a(\"a\");\n129 for (unsigned size=2; size<8; ++size) {\n130 matrix A(size,size);\n131 for (unsigned co=0; co<size; ++co) {\n132 for (unsigned ro=0; ro<size; ++ro) {\n133 // populate some elements\n134 ex elem = 0;\n135 if (rand()%(size/2) == 0)\n136 elem = sparse_tree(a, a, a, rand()%3, false, true, false);\n137 A.set(ro,co,elem);\n138 }\n139 }\n140 ex det_gauss = A.determinant(determinant_algo::gauss);\n141 ex det_laplace = A.determinant(determinant_algo::laplace);\n142 ex det_divfree = A.determinant(determinant_algo::divfree);\n143 ex det_bareiss = A.determinant(determinant_algo::bareiss);\n144 if ((det_gauss-det_laplace).normal() != 0 ||\n145 (det_bareiss-det_laplace).normal() != 0 ||\n146 (det_divfree-det_laplace).normal() != 0) {\n147 clog << \"Determinant of \" << size << \"x\" << size << \" matrix \"\n148 << endl << A << endl\n149 << \"is inconsistent between different algorithms:\" << endl\n150 << \"Gauss elimination: \" << det_gauss << endl\n151 << \"Minor elimination: \" << det_laplace << endl\n152 << \"Division-free elim.: \" << det_divfree << endl\n153 << \"Fraction-free elim.: \" << det_bareiss << endl;\n154 ++result;\n155 }\n156 }\n158 return result;\n159 }\n161 static unsigned symbolic_matrix_inverse()\n162 {\n163 unsigned result = 0;\n164 symbol a(\"a\"), b(\"b\"), c(\"c\");\n166 for (unsigned size=2; size<6; ++size) {\n167 matrix A(size,size);\n168 do {\n169 for (unsigned co=0; co<size; ++co) {\n170 for (unsigned ro=0; ro<size; ++ro) {\n171 // populate some elements\n172 ex elem = 0;\n173 if (rand()%(size/2) == 0)\n174 elem = sparse_tree(a, b, c, rand()%2, false, true, false);\n175 A.set(ro,co,elem);\n176 }\n177 }\n178 } while (A.determinant() == 0);\n179 matrix B = A.inverse();\n180 matrix C = A.mul(B);\n181 bool ok = true;\n182 for (unsigned ro=0; ro<size; ++ro)\n183 for (unsigned co=0; co<size; ++co)\n184 if (C(ro,co).normal() != (ro==co?1:0))\n185 ok = false;\n186 if (!ok) {\n187 clog << \"Inverse of \" << size << \"x\" << size << \" matrix \"\n188 << endl << A << endl\n189 << \"erroneously returned: \"\n190 << endl << B << endl;\n191 ++result;\n192 }\n193 }\n195 return result;\n196 }\n198 unsigned check_matrices()\n199 {\n200 unsigned result = 0;\n202 cout << \"checking symbolic matrix manipulations\" << flush;\n203 clog << \"---------symbolic matrix manipulations:\" << endl;\n205 result += integdom_matrix_determinants(); cout << '.' << flush;\n206 result += rational_matrix_determinants(); cout << '.' << flush;\n207 result += funny_matrix_determinants(); cout << '.' << flush;\n208 result += compare_matrix_determinants(); cout << '.' << flush;\n209 result += symbolic_matrix_inverse(); cout << '.' << flush;\n211 if (!result) {\n212 cout << \" passed \" << endl;\n213 clog << \"(no output)\" << endl;\n214 } else {\n215 cout << \" failed \" << endl;\n216 }\n218 return result;\n219 }"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5729338,"math_prob":0.9453031,"size":5738,"snap":"2023-14-2023-23","text_gpt3_token_len":1751,"char_repetition_ratio":0.1573073,"word_repetition_ratio":0.16759777,"special_character_ratio":0.38741723,"punctuation_ratio":0.20319432,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9964519,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T04:08:05Z\",\"WARC-Record-ID\":\"<urn:uuid:d8bfa2c4-831b-4e9f-8789-bbe37e814832>\",\"Content-Length\":\"81510\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a41dea58-643c-4b46-8064-7b169ee8c841>\",\"WARC-Concurrent-To\":\"<urn:uuid:8df790ac-f340-45a6-8855-2a0cc6c44dcc>\",\"WARC-IP-Address\":\"188.68.41.94\",\"WARC-Target-URI\":\"https://www.ginac.de/ginac.git/?p=ginac.git;a=blob;f=check/check_matrices.cpp;h=91f4b5da4b64ace42691890fbf614f9268d0a467;hb=91f4b5da4b64ace42691890fbf614f9268d0a467\",\"WARC-Payload-Digest\":\"sha1:IZPQQPCHUOJYFBF4WE6A7QJ5RPIODH53\",\"WARC-Block-Digest\":\"sha1:N2ZPL7YOQFQAHJFFK33CF6ODJFMN7BLD\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648322.84_warc_CC-MAIN-20230602040003-20230602070003-00295.warc.gz\"}"} |
http://www.koreascience.or.kr/search.page?keywords=Linear+programming+%28LP%29&pageSize=10&pageNo=1 | [
"• Title, Summary, Keyword: Linear programming (LP)\n\n### Problem Solution of Linear Programming based Neural Network\n\n• Son, Jun-Hyug;Seo, Bo-Hyeok\n• Proceedings of the KIEE Conference\n• /\n• /\n• pp.98-101\n• /\n• 2004\n• Linear Programming(LP) is the term used for defining a wide range of optimization problems in which the objective function to be minimized or maximized is linear in the unknown variables and the constraints are a combination of linear equalities and inequalities. LP problems occur in many real-life economic situations where profits are to be maximized or costs minimized with constraint limits on resources. While the simplex method introduced in a later reference can be used for hand solution of LP problems, computer use becomes necessary even for a small number of variables. Problems involving diet decisions, transportation, production and manufacturing, product mix, engineering limit analysis in design, airline scheduling, and so on are solved using computers. This technique is called Sequential Linear Programming (SLP). This paper describes LP's problems and solves a LP's problems using the neural networks.\n\n### Forest Management Planning by Linear Programming - Timber Harvest Scheduling of a Korean Pine stand - (Linear Programming에 의한 삼림경영계획(森林經營計劃) - 잣나무임분(林分)의 삼림수확계획(森林收穫計劃)을 중심으로 -)\n\n• Woo, Jong Choon\n• Journal of Korean Society of Forest Science\n• /\n• v.80 no.4\n• /\n• pp.427-435\n• /\n• 1991\n• Linear programming(LP) is a well-known method in optimizing timber harvest schedules. This paper describes a linear programming formulation of korean pine stands for timber harvest scheduling problems. Simulation technique and LP were applied to optimize the time and space distribution of the sustained yield for the 10-year forest management planning horizon. Growthfunction of korean pine stands in study area was derived with the yield table. This growthfunction was contained to the simulation model in estimating of changing stand volume conditions for the planning horizon. These estimated values were served as the basic data of LP model, and LP model was formulated with the maximum of periodical harvest volume calculated by the classical yield regulation method (Paulsen-Hundeshagen formula) and the maximum of periodical harvest area calculated for the normal age distribution. The timber harvest schedule was established periodically for each subcompartment of korean pine stands in experiment forest of College of Forestry in Kangweon National University with the here developed LP model.\n\n### Deformable Surface 3D Reconstruction from a Single Image by Linear Programming\n\n• Ma, Wenjuan;Sun, Shusen\n• KSII Transactions on Internet and Information Systems (TIIS)\n• /\n• v.11 no.6\n• /\n• pp.3121-3142\n• /\n• 2017\n• We present a method for 3D shape reconstruction of inextensible deformable surfaces from a single image. The key of our approach is to represent the surface as a 3D triangulated mesh and formulate the reconstruction problem as a sequence of Linear Programming (LP) problems. The LP problem consists of data constraints which are 3D-to-2D keypoint correspondences and shape constraints which are designed to retain original lengths of mesh edges. We use a closed-form method to generate an initial structure, then refine this structure by solving the LP problem iteratively. Compared with previous methods, ours neither involves smoothness constraints nor temporal consistency, which enables us to recover shapes of surfaces with various deformations from a single image. The robustness and accuracy of our approach are evaluated quantitatively on synthetic data and qualitatively on real data.\n\n### Analysis of the Methodology for Linear Programming Optimality Analysis using Metamodelling Techniques\n\n• Lee, Young-Hae;Jeong, Chan-Seok\n• Journal of the military operations research society of Korea\n• /\n• v.25 no.2\n• /\n• pp.1-14\n• /\n• 1999\n• Metamodels using response surface methodology (RSM) are used for the optimality analysis of linear programming (LP). They have the form of a simple polynomial, and predict the optimal objective function value of an LP for various levels of the constraints. The metamodelling techniques for optimality analysis of LP can be applied to large-scale LP models. What is needed is some large-scale application of the techniques to verify how accurate they are. In this paper, we plan to use the large scale LP model, strategic transport optimal routing model (STORM). The developed metamodels of the large scale LP can provide some useful information.\n\n### Development of an LP integrated environment software under MS-DOS (MS-DOS용 선형계획법 통합환경 소프트웨어의 개발)\n\n• 설동렬;박찬규;서용원;박순달\n• Korean Management Science Review\n• /\n• v.12 no.1\n• /\n• pp.125-138\n• /\n• 1995\n• This paper is to develop an integrated environment software on MS-DOS for linear programming. For the purpose, First, the linear programming integrated environment software satisfying both the educational purpose and the professional purpose was designed and constructed on MS-DOS. Second, the text editor with big capacity was developed. The arithmetic form analyser was also developed and connected to the test editor so that users can input data in the arithmetic form. As a result, users can learn and perform linear programming in the linear programming integrated environment software.\n\n### Linear Programming Applications to Managerial Accounting Decision Makings (선형계획법을 이용한 관리회계적 의사결정)\n\n• Song, Han-Sik;Choi, Min-Cheol\n• /\n• v.9 no.4\n• /\n• pp.99-117\n• /\n• 2018\n• This study has investigated Linear Programming (LP) applications to special decision making problems in managerial accounting with the help of spreadsheet Solver tools. It uses scenario approaches to case examples having three products and three resources in make-and-supply business operations, which is applicable to cases having more variables and constraints. Integer Programmings (IP) are applied in order to model situations when products are better valued in integer values or logical constraints are required. Three cases in one-time-only special order decisions include Goal Programming approach, Knapsack problems with 0/1 selections, and fixed-charge 0/1 integer modelling techniques for set-up operation costs. For the decisions in outsourcing problems, opportunity-costs of resources expressed by shadow-prices are considered to determine their precise contributions. It has also shown that the improvement in work-shop operation for an unprofitable product must overcome its 'reduced cost' by the sum of direct manufacturing cost savings and its shadow-price contributions. This paper has demonstrated how various real situations of special decision problem in managerial accounting can be approached without mistakes by using LP's and IP's, and how students both in accounting and management science can acquire LP skills in their education.\n\n### A Mathematical Structure and Formulation of Bottom-up Model based on Linear Programming (온실가스감축정책 평가를 위한 LP기반 상향식 모형의 수리 구조 및 정식화에 대한 연구)\n\n• Kim, Hu Gon\n• Journal of Energy Engineering\n• /\n• v.23 no.4\n• /\n• pp.150-159\n• /\n• 2014\n• Since the release of mid-term domestic GHG goals until 2020, in 2009, some various GHG reduction policies have been proposed. There are two types of modeling approaches for identifying options required to meet greenhouse gas (GHG) abatement targets and assessing their economic impacts: top-down and bottom-up models. Examples of the bottom-up optimization models include MARKAL, MESSAGE, LEAP, and AIM, all of which are developed based on linear programming (LP) with a few differences in user interface and database utilization. In this paper, we suggest a simplified LP formulation and how can build it through step-by-step procedures.\n\n### A Case Study on the Application of Decomposition Principle to a Linear Programming Problem (분할기법(分割技法)을 이용한 선형계획법(線型計劃法)의 응용(應用)에 관한 사례 연구(事例 硏究))\n\n• Yun, In-Jung;Kim, Seong-In\n• IE interfaces\n• /\n• v.1 no.1\n• /\n• pp.1-7\n• /\n• 1988\n• This paper discusses the applicability of the decomposition principle to an LP (Linear Programming) problem. Through a case study on product mix problems in a chemical process of Korean Steel Chemical Co., Ltd., the decomposition algorithm, LP Simplex method and a modified method are compared and evaluated in computation time and number of iterations.\n\n### On Solving the Fuzzy Goal Programming and Its Extension (불분명한 북표계확볍과 그 확장)\n\n• 정충영\n• Journal of the Korean Operations Research and Management Science Society\n• /\n• v.11 no.2\n• /\n• pp.79-87\n• /\n• 1986\n• This paper illustrates a new method to solve the fuzzy goal programming (FGP) problem. It is proved that the FGP proposed by Narasimhan can be solved on the basis of linear programming(LP) model. Narasimhan formulated the FGP problem as a set of $S^{K}$LP problems, each containing 3K constraints, where K is the number of fuzzy goals/constraints. Whereas Hanna formulated the FGP problem as a single LP problem with only 2K constraints and 2K + 1 additional variables. This paper presents that the FGP problem can be transformed with easy into a single LP model with 2K constraints and only one additional variables. And we propose extended FGP :(1) FGP with weights associated with individual goals, (2) FGP with preemptive prioities. The extended FGP has a framework that is identical to that of conventional goal programming (GP), such that the extended FGP can be applied with fuzzy concept to the all areas where GP can be applied.d.\n\n### A New Approach for Forest Management Planning : Fuzzy Multiobjective Linear Programming (삼림경영계획(森林經營計劃)을 위한 새로운 접근법(接近法) : 퍼지 다목표선형계획법(多目標線型計劃法))\n\n• Woo, Jong Choon\n• Journal of Korean Society of Forest Science\n• /\n• v.83 no.3\n• /\n• pp.271-279\n• /\n• 1994\n• This paper descbibes a fuzzy multiobjective linear programming, which is a relatively new approach in forestry in solving forest management problems. At first, the fuzzy set theory is explained briefly and the fuzzy linear programming(FLP) and the fuzzy multiobjective linear programming(FMLP) are introduced conceptionally. With the information obtained from the study area in Thailand, a standard linear programming problem is formulated, and optimal solutions (present net worth) are calculated for four groups of timber price by this LP model, respectively. This LP model is reformulated to a fuzzy multiobjective linear programming model to accommodate uncertain timber values and with this FMLP model a compromise solution is attained. Optimal solutions of four objective functions for four timber price groups and the compromise solution are compared and discussed."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9272438,"math_prob":0.78794587,"size":922,"snap":"2020-34-2020-40","text_gpt3_token_len":161,"char_repetition_ratio":0.11220044,"word_repetition_ratio":0.0,"special_character_ratio":0.17028199,"punctuation_ratio":0.08496732,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9560654,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T03:19:54Z\",\"WARC-Record-ID\":\"<urn:uuid:568679ae-fca8-4bec-a1a1-8ed279df27b9>\",\"Content-Length\":\"268580\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f8df0070-2977-47b6-80fa-c0c83f7778f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:26bf7dc5-6612-426a-ac87-958ced1ff75d>\",\"WARC-IP-Address\":\"203.250.218.22\",\"WARC-Target-URI\":\"http://www.koreascience.or.kr/search.page?keywords=Linear+programming+%28LP%29&pageSize=10&pageNo=1\",\"WARC-Payload-Digest\":\"sha1:LTMZS524KHYG2FLEFF7HLLJ5FFYNOSKU\",\"WARC-Block-Digest\":\"sha1:543WAEYFFRVGLYG4EMSGLCLSQ5NHBDVG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401583556.73_warc_CC-MAIN-20200928010415-20200928040415-00367.warc.gz\"}"} |
https://stackoverflow.com/questions/2702564/how-can-i-quickly-sum-all-numbers-in-a-file/13073486 | [
"How can I quickly sum all numbers in a file?\n\nI have a file which contains several thousand numbers, each on it's own line:\n\n``````34\n42\n11\n6\n2\n99\n...\n``````\n\nI'm looking to write a script which will print the sum of all numbers in the file. I've got a solution, but it's not very efficient. (It takes several minutes to run.) I'm looking for a more efficient solution. Any suggestions?\n\n• What was your slow solution? Maybe we can help you figure out what was slow about it. :) – brian d foy Apr 23 '10 at 23:59\n• @brian d foy, I'm too embarrassed to post it. I know why it's slow. It's because I call \"cat filename | head -n 1\" to get the top number, add it to a running total, and call \"cat filename | tail...\" to remove the top line for the next iteration... I have a lot to learn about programming!!! – Mark Roberts Apr 24 '10 at 1:22\n• That's...very systematic. Very clear and straight forward, and I love it for all that it is a horrible abomination. Built, I assume, out of the tools that you knew when you started, right? – dmckee Apr 24 '10 at 2:43\n• full duplicate: stackoverflow.com/questions/450799/… – codeholic Apr 26 '10 at 11:39\n• @MarkRoberts It must have taken you a long while to work that out. It's a very cleaver problem solving technique, and oh so wrong. It looks like a classic case of over think. Several of Glen Jackman's solutions shell scripting solutions (and two are pure shell that don't use things like `awk` and `bc`). These all finished adding a million numbers up in less than 10 seconds. Take a look at those and see how it can be done in pure shell. – David W. Aug 22 '13 at 14:24\n\nFor a Perl one-liner, it's basically the same thing as the `awk` solution in Ayman Hourieh's answer:\n\n`````` % perl -nle '\\$sum += \\$_ } END { print \\$sum'\n``````\n\nIf you're curious what Perl one-liners do, you can deparse them:\n\n`````` % perl -MO=Deparse -nle '\\$sum += \\$_ } END { print \\$sum'\n``````\n\nThe result is a more verbose version of the program, in a form that no one would ever write on their own:\n\n``````BEGIN { \\$/ = \"\\n\"; \\$\\ = \"\\n\"; }\nLINE: while (defined(\\$_ = <ARGV>)) {\nchomp \\$_;\n\\$sum += \\$_;\n}\nsub END {\nprint \\$sum;\n}\n-e syntax OK\n``````\n\nJust for giggles, I tried this with a file containing 1,000,000 numbers (in the range 0 - 9,999). On my Mac Pro, it returns virtually instantaneously. That's too bad, because I was hoping using `mmap` would be really fast, but it's just the same time:\n\n``````use 5.010;\nuse File::Map qw(map_file);\n\nmap_file my \\$map, \\$ARGV;\n\n\\$sum += \\$1 while \\$map =~ m/(\\d+)/g;\n\nsay \\$sum;\n``````\n• Wow, that shows a deep understanding on what code -nle actually wraps around the string you give it. My initial thought was that you shouldn't post while intoxicated but then I noticed who you were and remembered some of your other Perl answers :-) – paxdiablo Apr 23 '10 at 23:52\n• -n and -p just put characters around the argument to -e, so you can use those characters for whatever you want. We have a lot of one-liners that do interesting things with that in Effective Perl Programming (which is about to hit the shelves). – brian d foy Apr 23 '10 at 23:56\n• Nice, what are these non-matching curly braces about? – Frank Apr 24 '10 at 6:00\n• -n adds the `while { }` loop around your program. If you put `} ... {` inside, then you have `while { } ... { }`. Evil? Slightly. – jrockway Apr 24 '10 at 8:35\n• Big bonus for highlighting the `-MO=Deparse` option! Even though on a separate topic. – conny Nov 4 '11 at 12:47\n\nYou can use awk:\n\n``````awk '{ sum += \\$1 } END { print sum }' file\n``````\n• program exceeded: maximum number of field sizes: 32767 – leef Nov 2 '12 at 3:50\n• Simple, and doesn't require perl. This is the best answer. – Eddified Jul 8 '14 at 21:12\n• With the `-F '\\t'` option if your fields contain spaces and are separated by tabs. – Ethan Furman Jul 17 '14 at 22:17\n• Please mark this as the best answer. It also works if you want to sum the first value in each row, inside a TSV (tab-separated value) file. – Andrea Oct 27 '16 at 10:42\n\nNone of the solution thus far use `paste`. Here's one:\n\n``````paste -sd+ filename | bc\n``````\n\nAs an example, calculate Σn where 1<=n<=100000:\n\n``````\\$ seq 100000 | paste -sd+ | bc -l\n5000050000\n``````\n\n(For the curious, `seq n` would print a sequence of numbers from `1` to `n` given a positive number `n`.)\n\n• Very nice! And easy to remember – Brendan Maguire Jul 30 '14 at 11:45\n• very unix style solution =) – Peter K May 27 '15 at 11:15\n• `seq 100000 | paste -sd+ - | bc -l` on Mac OS X Bash shell. And this is by far the sweetest and the unixest solution! – Simo A. Mar 2 at 15:54\n\nJust for fun, let's benchmark it:\n\n``````\\$ for ((i=0; i<1000000; i++)) ; do echo \\$RANDOM; done > random_numbers\n\n\\$ time perl -nle '\\$sum += \\$_ } END { print \\$sum' random_numbers\n16379866392\n\nreal 0m0.226s\nuser 0m0.219s\nsys 0m0.002s\n\n\\$ time awk '{ sum += \\$1 } END { print sum }' random_numbers\n16379866392\n\nreal 0m0.311s\nuser 0m0.304s\nsys 0m0.005s\n\n\\$ time { { tr \"\\n\" + < random_numbers ; echo 0; } | bc; }\n16379866392\n\nreal 0m0.445s\nuser 0m0.438s\nsys 0m0.024s\n\n\\$ time { s=0;while read l; do s=\\$((s+\\$l));done<random_numbers;echo \\$s; }\n16379866392\n\nreal 0m9.309s\nuser 0m8.404s\nsys 0m0.887s\n\n\\$ time { s=0;while read l; do ((s+=l));done<random_numbers;echo \\$s; }\n16379866392\n\nreal 0m7.191s\nuser 0m6.402s\nsys 0m0.776s\n\n\\$ time { sed ':a;N;s/\\n/+/;ta' random_numbers|bc; }\n^C\n\nreal 4m53.413s\nuser 4m52.584s\nsys 0m0.052s\n``````\n\nI aborted the sed run after 5 minutes\n\nI've been diving to , and it is speedy:\n\n``````\\$ time lua -e 'sum=0; for line in io.lines() do sum=sum+line end; print(sum)' < random_numbers\n16388542582.0\n\nreal 0m0.362s\nuser 0m0.313s\nsys 0m0.063s\n``````\n\nand while I'm updating this, ruby:\n\n``````\\$ time ruby -e 'sum = 0; File.foreach(ARGV.shift) {|line| sum+=line.to_i}; puts sum' random_numbers\n16388542582\n\nreal 0m0.378s\nuser 0m0.297s\nsys 0m0.078s\n``````\n\nHeed Ed Morton's advice: using `\\$1`\n\n``````\\$ time awk '{ sum += \\$1 } END { print sum }' random_numbers\n16388542582\n\nreal 0m0.421s\nuser 0m0.359s\nsys 0m0.063s\n``````\n\nvs using `\\$0`\n\n``````\\$ time awk '{ sum += \\$0 } END { print sum }' random_numbers\n16388542582\n\nreal 0m0.302s\nuser 0m0.234s\nsys 0m0.063s\n``````\n• +1: For coming up with a bunch of solutions, and benchmarking them. – David W. Aug 22 '13 at 14:24\n• time cat random_numbers|paste -sd+|bc -l real 0m0.317s user 0m0.310s sys 0m0.013s – rafi wiener Jul 3 '17 at 10:21\n• that should be just about identical to the `tr` solution. – glenn jackman Jul 7 '17 at 12:47\n• Your awk script should execute a bit faster if you use `\\$0` instead of `\\$1` since awk does field splitting (which obviously takes time) if any field is specifically mentioned in the script but doesn't otherwise. – Ed Morton Mar 18 at 13:56\n\nThis works:\n\n``````{ tr '\\n' +; echo 0; } < file.txt | bc\n``````\n• what's the reason for adding `echo 0` after `tr`? – Dhiraj Jul 5 '18 at 17:17\n• Out of `tr` you end up with a trailing `+`: `1+2+3+4+` That would be a syntax error to `bc`. So echo `0` to fix up the syntax: `1+2+3+4+0` – Mark L. Smith Jul 6 '18 at 14:02\n\nAnother option is to use `jq`:\n\n``````\\$ seq 10|jq -s add\n55\n``````\n\n`-s` (`--slurp`) reads the input lines into an array.\n\nThis is straight Bash:\n\n``````sum=0\ndo\n(( sum += line ))\ndone < file\necho \\$sum\n``````\n• this willl work as long as there are no decimals – ghostdog74 Apr 24 '10 at 5:09\n• And it's probably one of the slowest solutions and therefore not so suitable for large amounts of numbers. – David Apr 23 at 16:06\n\nHere's another one-liner\n\n``````( echo 0 ; sed 's/\\$/ +/' foo ; echo p ) | dc\n``````\n\nThis assumes the numbers are integers. If you need decimals, try\n\n``````( echo 0 2k ; sed 's/\\$/ +/' foo ; echo p ) | dc\n``````\n\nAdjust 2 to the number of decimals needed.\n\nI prefer to use GNU datamash for such tasks because it's more succinct and legible than perl or awk. For example\n\n``````datamash sum 1 < myfile\n``````\n\nwhere 1 denotes the first column of data.\n\n• This does not appear to be a standard component as I do not see it in my Ubuntu installation. Would like to see it benchmarked, though. – Steven the Easily Amused Jan 29 '18 at 17:59\n``````cat nums | perl -ne '\\$sum += \\$_ } { print \\$sum'\n``````\n\n(same as brian d foy's answer, without 'END')\n\nJust for fun, lets do it with PDL, Perl's array math engine!\n\n``````perl -MPDL -E 'say rcols(shift)->sum' datafile\n``````\n\n`rcols` reads columns into a matrix (1D in this case) and `sum` (surprise) sums all the element of the matrix.\n\n• How fix Can't locate PDL.pm in @INC (you may need to install the PDL module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 ?)) for fun of course=) – Fortran Apr 29 '17 at 15:01\n• You have to install PDL first, it isn't a Perl native module. – Joel Berger May 1 '17 at 20:06\n\nHere is a solution using python with a generator expression. Tested with a million numbers on my old cruddy laptop.\n\n``````time python -c \"import sys; print sum((float(l) for l in sys.stdin))\" < file\n\nreal 0m0.619s\nuser 0m0.512s\nsys 0m0.028s\n``````\n• A simple list comprehension with a named function is a nice use-case for `map()`: `map(float, sys.stdin)` – sevko May 1 '15 at 19:50\n\nI prefer to use R for this:\n\n``````\\$ R -e 'sum(scan(\"filename\"))'\n``````\n``````sed ':a;N;s/\\n/+/;ta' file|bc\n``````\n``````\\$ perl -MList::Util=sum -le 'print sum <>' nums.txt\n``````\n\nMore succinct:\n\n``````# Ruby\nruby -e 'puts open(\"random_numbers\").map(&:to_i).reduce(:+)'\n\n# Python\npython -c 'print(sum(int(l) for l in open(\"random_numbers\")))'\n``````\n\nPerl 6\n\n``````say sum lines\n``````\n``````~\\$ perl6 -e '.say for 0..1000000' > test.in\n\n~\\$ perl6 -e 'say sum lines' < test.in\n500000500000\n``````\n\nI have not tested this but it should work:\n\n``````cat f | tr \"\\n\" \"+\" | sed 's/+\\$/\\n/' | bc\n``````\n\nYou might have to add \"\\n\" to the string before bc (like via echo) if bc doesn't treat EOF and EOL...\n\n• It doesn't work. `bc` issues a syntax error because of the trailing \"+\" and lack of newline at the end. This will work and it eliminates a useless use of `cat`: `{ tr \"\\n\" \"+\" | sed 's/+\\$/\\n/'| bc; } < numbers2.txt` or `<numbers2.txt tr \"\\n\" \"+\" | sed 's/+\\$/\\n/'| bc` – Dennis Williamson Apr 24 '10 at 2:18\n• `tr \"\\n\" \"+\" <file | sed 's/+\\$/\\n/' | bc` – ghostdog74 Apr 24 '10 at 5:23\n\nAnother for fun\n\n``````sum=0;for i in \\$(cat file);do sum=\\$((sum+\\$i));done;echo \\$sum\n``````\n\nor another bash only\n\n``````s=0;while read l; do s=\\$((s+\\$l));done<file;echo \\$s\n``````\n\nBut awk solution is probably best as it's most compact.\n\nWith Ruby:\n\n``````ruby -e \"File.read('file.txt').split.inject(0){|mem, obj| mem += obj.to_f}\"\n``````\n• Another option (when input is from STDIN) is `ruby -e'p readlines.map(&:to_f).reduce(:+)'`. – nisetama Feb 24 at 22:30\n\nI don't know if you can get a lot better than this, considering you need to read through the whole file.\n\n``````\\$sum = 0;\nwhile(<>){\n\\$sum += \\$_;\n}\nprint \\$sum;\n``````\n• what's the \\$_ mean? – Mark Roberts Apr 23 '10 at 23:39\n• Very readable. For perl. But yeah, it's going to have to be something like that... – dmckee Apr 23 '10 at 23:40\n• `\\$_` is the default variable. The line input operator, `<>`, puts it's result in there by default when you use `<>` in `while`. – brian d foy Apr 23 '10 at 23:53\n• @Mark, `\\$_` is the topic variable--it works like the 'it'. In this case `<>` assigns each line to it. It gets used in a number of places to reduce code clutter and help with writing one-liners. The script says \"Set the sum to 0, read each line and add it to the sum, then print the sum.\" – daotoad Apr 23 '10 at 23:59\n• @Stefan, with warnings and strictures off, you can skip declaring and initializing `\\$sum`. Since this is so simple, you can even use a statement modifier `while`: `\\$sum += \\$_ while <>; print \\$sum;` – daotoad Apr 24 '10 at 0:00\n\nHere's another:\n\n``````open(FIL, \"a.txt\");\n\nmy \\$sum = 0;\nforeach( <FIL> ) {chomp; \\$sum += \\$_;}\n\nclose(FIL);\n\nprint \"Sum = \\$sum\\n\";\n``````\n\nC always wins for speed:\n\n``````#include <stdio.h>\n#include <stdlib.h>\n\nint main(int argc, char **argv) {\nchar *line = NULL;\nsize_t len = 0;\ndouble sum = 0.0;\n\nwhile (read = getline(&line, &len, stdin) != -1) {\nsum += atof(line);\n}\n\nprintf(\"%f\", sum);\nreturn 0;\n}\n``````\n\nTiming for 1M numbers (same machine/input as my python answer):\n\n``````\\$ gcc sum.c -o sum && time ./sum < numbers\n5003371677.000000\nreal 0m0.188s\nuser 0m0.180s\nsys 0m0.000s\n``````\n\nYou can do it with Alacon - command-line utility for Alasql database.\n\nIt works with Node.js, so you need to install Node.js and then Alasql package:\n\nTo calculate sum from TXT file you can use the following command:\n\n``````> node alacon \"SELECT VALUE SUM() FROM TXT('mydata.txt')\"\n``````\n\nIt is not easier to replace all new lines by `+`, add a `0` and send it to the `Ruby` interpreter?\n\n``````(sed -e \"s/\\$/+/\" file; echo 0)|irb\n``````\n\nIf you do not have `irb`, you can send it to `bc`, but you have to remove all newlines except the last one (of `echo`). It is better to use `tr` for this, unless you have a PhD in `sed` .\n\n``````(sed -e \"s/\\$/+/\" file|tr -d \"\\n\"; echo 0)|bc\n``````\n\nJust to be ridiculous:\n\n``````cat f | tr \"\\n\" \"+\" | perl -pne chop | R --vanilla --slave\n``````\n• This one eventually died with \"Error: evaluation nested too deeply: infinite recursion / options(expressions=)?\" for my tests. I would have thought R could do this all by itself. – brian d foy Apr 24 '10 at 0:54\n• Haha nice solution. – Frank Apr 24 '10 at 5:58\n\ncat f | tr \"\\n\" \"+\" | perl -pne chop | R --vanilla --slave\n\nprotected by InianFeb 14 at 10:58\n\nThank you for your interest in this question. Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count)."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8753501,"math_prob":0.7925725,"size":1041,"snap":"2019-26-2019-30","text_gpt3_token_len":316,"char_repetition_ratio":0.08100289,"word_repetition_ratio":0.049751244,"special_character_ratio":0.3506244,"punctuation_ratio":0.15517241,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9791008,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-19T20:10:46Z\",\"WARC-Record-ID\":\"<urn:uuid:74c85a87-d2ef-4d0c-b60d-298d3d001d05>\",\"Content-Length\":\"295345\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab457767-3a9b-4ea0-9b40-a1e0208c3c07>\",\"WARC-Concurrent-To\":\"<urn:uuid:988e2c1b-5fb9-42b1-900c-390719f4fb97>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/2702564/how-can-i-quickly-sum-all-numbers-in-a-file/13073486\",\"WARC-Payload-Digest\":\"sha1:DWTMZ73JPTH5DVRQYQJPKAA6IF6ZXK5N\",\"WARC-Block-Digest\":\"sha1:EU4UPZY5JYBD7L3R7BGFJHFVZDUXYIMS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999040.56_warc_CC-MAIN-20190619184037-20190619210037-00016.warc.gz\"}"} |
https://www.electrondepot.com/fpga/questions-about-counter-in-vhdl-38087-.htm | [
"# Questions about counter in VHDL\n\n• posted\n\nI have implemented a 8-bit synchronous counter by VHDL. The result is that the LED display show continuously running the count from 0 to F(in Hex). Now, I need to change the result which the LED display can count from 0 to 9 only. How can I change in the VHDL code? Can anyone answer me? Thanks a lot!! The following VHDL code are about 8-bit synchronous counter: Library IEEE; USE IEEE.std_logic_1164.all; USE IEEE.std_logic_arith.all; USE IEEE.std_logic_unsigned.all;\n\nENTITY counter_eg IS PORT( PB1, clk_25MHz : IN std_logic; led0, led1, led2, led3, led4, led5, led6 : OUT std_logic); END counter_eg;\n\nARCHITECTURE c OF counter_eg IS\n\nCOMPONENT counter PORT( Clock, Reset : IN std_logic; count : OUT std_logic_vector(3 DOWNTO 0));\n\nEND COMPONENT;\n\nCOMPONENT dec_7seg PORT( hex_digit : IN std_logic_vector(3 DOWNTO 0); segment_a, segment_b, segment_c, segment_d, segment_e, segment_f, segment_g : OUT std_logic); END COMPONENT;\n\nCOMPONENT clk_div PORT( clock_25MHz : IN std_logic; clock_1MHz : OUT std_logic; clock_100KHz : OUT std_logic; clock_10KHz : OUT std_logic; clock_1KHz : OUT std_logic; clock_100Hz : OUT std_logic; clock_10Hz : OUT std_logic; clock_1Hz : OUT std_logic); END COMPONENT;\n\nSIGNAL count : std_logic_vector(3 DOWNTO 0); SIGNAL clk_1KHz : std_logic; BEGIN c0: clk_div port map (clock_25MHz=> clk_25MHz, clock_1KHz=>clk_1KHz); c1: counter port map (clock=>clk_1KHz, reset=>PB1, count=> count); c2: dec_7seg port map (count, led0, led1, led2, led3, led4, led5, led6); END c;\n\n• posted\n\nI'm not going to do your homework for you but here's a suggestion, how about decoding when the count equals 9 and using this to reset your counter. For your percentage I'm sure you can figure out how to code that in your vhdl.\n\nAlan\n\n• posted\n\nI'll give you a hint, the counter is not in this file, only the intantiation of the counter. but you can decode count=10 and trigger a reset. (in this file) as Alan just said.\n\nAurash\n\nThe result is\n\n• posted\n\nHow about gating the clock to something really fast when the count is 10 -\n\n15 inclusive. The A-F will whizz by so fast, no-one will notice. If you get that to work, you deserve extra marks. HTH, Syms.\n• posted\n\nAlan, I want to ask how to decode in vhdl? When the count is at 9, press reset button is go back to 0 again.\n\nAlan Myler =E5=AF=AB=E9=81=93=EF=BC=9A\n\nHz);\n\nt);\n\n• posted\n\ngating the clcok?? i don't understand!! Can you explain? Thanks!!\n\n• posted\n\nSymon, gating the clcok?? i don't understand!! Can you explain? Thanks!!\n\n• posted\n\n• posted\n\nThat would only appear to count 0..9, a smart marker might say it still shows A..F just that your eye cannot resolve it... I think the answer is to re-write the FONT ROM code, so it only displays 0..9 - This would even pass a cursory test. ;) - and it appears to fully meet the wording of the spec...\n\n-jg\n\n• posted\n\nhi, Aurelian Lazarut I still don't understand what you means. How to decode in VHDL? The following are my counter VHDL code: Library IEEE; USE IEEE.std_logic_1164.all; USE IEEE.std_logic_arith.all; USE IEEE.std_logic_unsigned.all;\n\nEntity Counter IS Port ( Clock, Reset :IN std_logic; count : OUT std_logic_vector(3 DOWNTO 0)); END Counter;\n\nARCHITECTURE a OF Counter IS SIGNAL internal_count: std_logic_vector(3 DOWNTO 0);\n\nBEGIN count intantiation of the counter.\n\nHz);\n\nt);\n\n• posted\n\nJim, Of course! That's an excellent solution. Yet another example of why it's unnecessary to gate clocks. :-) cheers mate, Syms.\n\n• posted\n\nJim, I don't understand what you means. what is Font Rom code? Laura\n\nSymon =E5=AF=AB=E9=81=93=EF=BC=9A\n\nys\n\n• posted\n\nIn your first post, you mentioned a LED display - I assumed 7 Segment as you said 0..F. This needs a lookup table, to convert/map the 4 bit binary, to required LEDs - that table is what I call the Font-Rom. It is missing from the examples you have posted thus far.\n\nHave a look at the ISE webpack, WATCHxx.zip examples : these are a 3 digit stopwatch, and they have all you will need.\n\n-jg\n\n• posted"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5802835,"math_prob":0.5401259,"size":1488,"snap":"2023-40-2023-50","text_gpt3_token_len":449,"char_repetition_ratio":0.21698113,"word_repetition_ratio":0.028301887,"special_character_ratio":0.3030914,"punctuation_ratio":0.2857143,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9824454,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T03:49:37Z\",\"WARC-Record-ID\":\"<urn:uuid:cedbb73c-35b7-40af-9ecb-b05b9908330d>\",\"Content-Length\":\"97056\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:704b000e-a54b-46bb-87a5-955c058de11d>\",\"WARC-Concurrent-To\":\"<urn:uuid:55c33bd5-8155-491f-ba09-8cfb7f2c73c7>\",\"WARC-IP-Address\":\"104.131.180.133\",\"WARC-Target-URI\":\"https://www.electrondepot.com/fpga/questions-about-counter-in-vhdl-38087-.htm\",\"WARC-Payload-Digest\":\"sha1:PYXOQNN5TT7FVIFUGAMJKXLPK3AA4UYX\",\"WARC-Block-Digest\":\"sha1:NV66TJLD6ZVMQKRDAEB7N7ROLMHDGQR7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506676.95_warc_CC-MAIN-20230925015430-20230925045430-00752.warc.gz\"}"} |
https://apboardsolutions.guru/ap-board-6th-class-maths-notes-chapter-6/ | [
"Students can go through AP Board 6th Class Maths Notes Chapter 6 Basic Arithmetic to understand and remember the concepts easily.\n\n## AP State Board Syllabus 6th Class Maths Notes Chapter 6 Basic Arithmetic\n\n→ If a comparison is made by finding the difference between two quantities, it is called comparison by difference.\nEg: Age of Harshita is 11 years and age of Srija is 8 years. Harshita is (11 – 8 = 3) 3 years older than Srija or Srija is 3 years younger than Harshita.\n\n→ If a comparison is made by division it makes more sense than the comparison made by taking the difference.\nEg: If cost a key pad cell phone is Rs. 3000 and another smart phone is Rs. 15000, then the cost of the second phone is five times the cost of the first phone.",
null,
"→ Ratio: Comparison of two quantities of the same type by virtue of division is called ratio. Eg: The weight of Ramu is 24 kg and the weight of the Gopi is 36 kg., then the ratio of weights is 24/36. It can also be written as 24:36 and read as 24 is to 36.\nThe ratio of two numbers ‘a’ and ‘b’ (b ≠ 0) is a ÷ b or a/b or $$\\frac{a}{b}$$ and is denoted as a : b and is read as a is to b.\nIn the ratio a : b the quantities a and b are called the terms of the ratio.\nIn the ratio a : b the quantity a is called the first term or antecedent and b is called the second term or the consequent of the ratio.\nThe value of a fraction remains the same if the numerator and the denominators are multiplied or divided by the same non-zero number so is the ratio.\nThat is if the first term and the second term of a ratio are multiplied or divided by . the same non-zero number.\n3 : 4 = 3 × 5 : 4 × 5 = 15 : 20\nAlso 36 : 24 = 36 – 4 : 24 – 4 = 9 : 6.\n\n→ Ratio in the simplest form or in the lowest terms:\nA ratio a : b is said to be in its simplest form if its terms have no factors in common other than 1. A ratio in the simplest form is also called the ratio in its lowest terms. Generally ratios are expressed in their lowest terms.\nTo express a given ratio in its simplest term, we cancel H.C.F. from both its terms. To find the ratio of two terms, we express the both terms in the same units.\nEg: Ratio of 3 hours and 120 minutes is 3 : 2 (as 120 minutes = 2 hours) or 180 : 120 (as 3 hours = 180 minutes)\nA ratio has no units or it is independent of units used in the quantities compared. The order of terms in a ratio a : b is important a : b ≠ b : a.\n\n→ Equivalent ratio:\nA ratio obtained by multiplying or dividing the antecedent and consequent of a given ratio by the same number is called its equivalent ratio.\nEg: 3 : 4 = 3 × 5 : 4 × 5 = 15 : 20. Here 3 : 4 & 15 : 20 are called equivalent ratios.\nAlso 36 : 24 = 36 ÷ 4 : 24 ÷ 4 = 9 : 6. Here 36 : 24 & 9 : 6 are called equivalent ratios.",
null,
"→ Comparison of ratios: To compare two ratios\na) First express them as fractions\nb) Now convert them to like fractions\nc) Compare the like fractions\n\n→ Proportion:\nIf two ratios are equal, then the four terms of these ratios are said to be in proportion. If a : b = c : d, then a, b, c and d are said to be in proportion.\nThis is represented as a : b :: c : d and read as a is b is as c is d.\nThe equality of ratios is called proportion.\nConversely in the proportion a : b :: c : d , the terms a and d are called extremes and b and c are called means.\nIf four quantities are in proportion, then\nProduct of extremes = Product of means .\nIf a : b :: c : d, then a × d = b × c\nFrom this we have",
null,
"→ Unitary method:\nThe method in which first we find the value of one unit and then the value of required number of units is known as unitary method.\nEg: If the cost of 8 books Rs.96, then find the cost of 15 books.\nCost of one book = 96/8 = 12 Cost of 15 books = 12 × 15 = 180\nDistance travelled in a given time = speed × time From this we have",
null,
"",
null,
"→ Percentage:\nThe word per cent means for every hundred or out of hundred. The word percentage is derived from the Latin language. The % symbol is uses to represent percent.\nEg: 5% is read as five percent\n5% = $$\\frac{5}{100}$$ = 0.05\n38% = $$\\frac{38}{100}$$ = 0.38\n\n→ To convert a percentage into a fraction:\na) Drop the % symbol\nb) Divide the number by 100\nEg: 25% = $$\\frac{25}{100}$$ = 0.25 = $$\\frac{1}{4}$$\n\n→ To convert a fraction into percentage:\na) Assign the percentage symbol %\nb) Multiply the given fraction with 100\nEg: $$\\frac{3}{4}$$ = $$\\frac{3}{4}$$ × 100% = 75% = 0.75"
] | [
null,
"https://apboardsolutions.guru/wp-content/uploads/2020/12/AP-Board-Solutions.png",
null,
"https://apboardsolutions.guru/wp-content/uploads/2020/12/AP-Board-Solutions.png",
null,
"https://apboardsolutions.guru/wp-content/uploads/2021/04/AP-Board-6th-Class-Maths-Notes-Chapter-6-Basic-Arithmetic-1.png",
null,
"https://apboardsolutions.guru/wp-content/uploads/2021/04/AP-Board-6th-Class-Maths-Notes-Chapter-6-Basic-Arithmetic-2.png",
null,
"https://apboardsolutions.guru/wp-content/uploads/2020/12/AP-Board-Solutions.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94214386,"math_prob":0.9988071,"size":4347,"snap":"2022-05-2022-21","text_gpt3_token_len":1248,"char_repetition_ratio":0.12502879,"word_repetition_ratio":0.06903991,"special_character_ratio":0.31930068,"punctuation_ratio":0.12372188,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994934,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T11:31:43Z\",\"WARC-Record-ID\":\"<urn:uuid:e959c883-9a93-4b35-afa6-9137473730ee>\",\"Content-Length\":\"36758\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:054b8075-f411-4b87-a071-41014d055f7d>\",\"WARC-Concurrent-To\":\"<urn:uuid:563eb782-899d-4500-859b-d047e8005a66>\",\"WARC-IP-Address\":\"143.244.140.115\",\"WARC-Target-URI\":\"https://apboardsolutions.guru/ap-board-6th-class-maths-notes-chapter-6/\",\"WARC-Payload-Digest\":\"sha1:IHNTJNLUCYOMYXBTSW3646ZEGQHATDFG\",\"WARC-Block-Digest\":\"sha1:BZCZCD3DJEADQBCSF5P5D2WL7SVESQ4K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662527626.15_warc_CC-MAIN-20220519105247-20220519135247-00193.warc.gz\"}"} |
https://publications.iitm.ac.in/publication/student-course-allocation-with-constraints | [
"",
null,
"X\nStudent Course Allocation with Constraints\nPublished in Springer\n2019\nVolume: 11544 LNCS\n\nPages: 51 - 68\nAbstract\nReal-world matching scenarios, like the matching of students to courses in a university setting, involve complex downward-feasible constraints like credit limits, time-slot constraints for courses, basket constraints (say, at most one humanities elective for a student), in addition to the preferences of students over courses and vice versa, and class capacities. We model this problem as a many-to-many bipartite matching problem where both students and courses specify preferences over each other and students have a set of downward-feasible constraints. We propose an Iterative Algorithm Framework that uses a many-to-one matching algorithm and outputs a many-to-many matching that satisfies all the constraints. We prove that the output of such an algorithm is Pareto-optimal from the student-side if the many-to-one algorithm used is Pareto-optimal from the student side. For a given matching, we propose a new metric called the Mean Effective Average Rank (MEAR), which quantifies the goodness of allotment from the side of the students or the courses. We empirically evaluate two many-to-one matching algorithms with synthetic data modeled on real-world instances and present the evaluation of these two algorithms on different metrics including MEAR scores, matching size and number of unstable pairs. © 2019, Springer Nature Switzerland AG.\nJournal Data powered by TypesetLecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Data powered by TypesetSpringer 03029743 No\nAuthors (1)\n•",
null,
"Concepts (11)\n•",
null,
"Iterative methods\n•",
null,
"Pareto principle\n•",
null,
"Bipartite matching problems\n•",
null,
"COURSE ALLOCATION\n•",
null,
"Iterative algorithm\n•",
null,
"MANY TO MANY\n•",
null,
"MATCHING ALGORITHM\n•",
null,
"Pareto-optimal\n•",
null,
"Synthetic data\n•",
null,
"UNIVERSITY SETTINGS\n•",
null,
"Students"
] | [
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/ico-hamburger.svg",
null,
"https://typeset-partner-institution.s3-us-west-2.amazonaws.com/iit-madras/authors/Meghana-Nasre.png",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null,
"https://d5a9y5rnan99s.cloudfront.net/tds-static/images/Concept-default.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9018555,"math_prob":0.6960551,"size":1350,"snap":"2022-27-2022-33","text_gpt3_token_len":254,"char_repetition_ratio":0.14710252,"word_repetition_ratio":0.0,"special_character_ratio":0.17925926,"punctuation_ratio":0.07725322,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.967807,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,8,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T11:11:03Z\",\"WARC-Record-ID\":\"<urn:uuid:45cd2fda-d4fe-445d-9e30-37fab9b58801>\",\"Content-Length\":\"130075\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:02e62f00-2207-4ab7-980f-69c3d50c6e5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:e21df782-e3ad-49f8-9424-27276701e030>\",\"WARC-IP-Address\":\"54.218.38.100\",\"WARC-Target-URI\":\"https://publications.iitm.ac.in/publication/student-course-allocation-with-constraints\",\"WARC-Payload-Digest\":\"sha1:2QE5PYUQRQA2PM2LXQIFHOEIT7MD2PUZ\",\"WARC-Block-Digest\":\"sha1:LQQOARZTY25W2A3WH2IX44XS5VV7YLEW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570921.9_warc_CC-MAIN-20220809094531-20220809124531-00411.warc.gz\"}"} |
https://www.vackergroup.ae/our-products/heaters/electric-heater/ | [
"# Electric heater\n\nThe Electric heaters use electric coils to produce heat. They have thermostats to control the required temperature. These electric heaters are for buildings, warehouses, tents, construction sites, accommodation camps etc.\n\n## Our models of Electric Heaters\n\nOur major models of Electric heaters are listed below:\n\n1. ### Specifications of Electric Heater Model VAC-TEH 20 T\n\n1. This electric heater has a heating capacity of 2.5 kW in single-stage control.\n2. The maximum capacity stage is 2,150 kCal.",
null,
"3. The capacity of this electric heater at the maximum stage is 2.5 kW.\n4. The maximum amount of air in the maximum stage is 300 m³/h.\n5. Air pressure in the maximum stage is 40 Pa.\n6. The maximum temperature increase for this electric heater is 40 °C.\n7. It has an axial fan with one stage.\n8. It operates in an Electrical Mains connection of 230 V/50 Hz.\n9. The nominal current consumption is 10.9 Amps.\n10. Recommended fusing is 16 Amps.\n11. The power input of this electric heater is 2.5 kW.\n12. Electric connection is through a connection plug type CEE 7/7.\n13. The cable length provided along with the heater is 2.6 meters.\n14. It has a sound of 63 dB(A) at a distance of 1 meter.\n15. It has a Hose connector & Hose connection\n16. The Diameter of the hose of this electric heater is 155 mm.\n17. The maximum length of the hose is 7 meters.\n18. The dimensions of this electric heater are 280 mm x 260 mm x 300mm.\n19. The weight of this electric heater is 10.6 kg without packing.\n20. It has an adjustable thermostat to control the heat.\n2. ### Specifications of Electric Heater Model VAC-TEH 30 T\n\n1. This electric heater has a heating capacity of 3.3 kW in single-stage control.\n2. The maximum capacity stage is 2,837 kCal.",
null,
"3. The capacity of this electric heater at the maximum stage is 3.3 kW.\n4. The maximum amount of air in the maximum stage is 300 m³/h.\n5. Air pressure in the maximum stage is 40 Pa.\n6. The maximum temperature increase for this electric heater is 54 °C.\n7. It has an axial fan with one stage.\n8. It operates in an Electrical Mains connection of 230 V/50 Hz.\n9. The nominal current consumption is 14.4 Amps.\n10. Recommended fusing is 16 Amps.\n11. The power input of this electric heater is 3.3 kW.\n12. Electric connection is through a connection plug type CEE 7/7.\n13. The cable length provided along with the heater is 2.6 meters.\n14. It has a sound of 63 dB(A) at a distance of 1 meter.\n15. It has a Hose connector & Hose connection\n16. The Diameter of the hose of this electric heater is 155 mm.\n17. The maximum length of the hose is 7 meters.\n18. The dimensions of this electric heater are 280 mm x 260 mm x 300mm.\n19. The weight of this electric heater is 10.6 kg without packing.\n20. It has an adjustable thermostat to control the heat.\n3. ### Specifications of Electric Heater Model VAC-TEH 70 T\n\n1. This electric heater has a Heating capacity of 6,9 & 12 in three-stage control.\n2. The maximum capacity stage is 10,300 kCal.",
null,
"3. The capacity of this electric heater at the maximum stage is 12 kW.\n4. The maximum amount of air in the maximum stage is 1,258m³/h.\n5. Air pressure in the maximum stage is 110 Pa.\n6. The maximum temperature increase for this electric heater is 35 °C.\n7. It has an axial fan with one stage.\n8. It operates in an Electrical Mains connection of 440 V/50 Hz.\n9. The nominal current consumption is 18 Amps.\n10. Recommended fusing is 32 Amps.\n11. The power input of this electric heater is 12 kW.\n12. Electric connection is through a connection plug type CEE 32A – 5 Pin.\n13. The cable length provided along with the heater is 2.6 meters.\n14. It has a sound of 66 dB(A) at a distance of 1 meter.\n15. It has a Hose connector & Hose connection\n16. The Diameter of the hose of this electric heater is 300 mm.\n17. The maximum length of the hose is 15 meters.\n18. The dimensions of this electric heater are 400 mm x 400 mm x 480mm.\n19. The weight of this electric heater is 29.5 kg without packing.\n20. It has an adjustable thermostat to control the heat.\n4. ### Specifications of Electric Heater Model VAC-TEH 100\n\n1. This electric heater has a Heating capacity of 9,13.5 & 18 in three-stage control.\n2. The maximum capacity stage is 15,400 kCal.",
null,
"3. The capacity of this electric heater at the maximum stage is 18 kW.\n4. The maximum amount of air in the maximum stage is 1,785 m³/h.\n5. Air pressure in the maximum stage is 130 Pa.\n6. The maximum temperature increase for this electric heater is 38 °C.\n7. It has an axial fan with one stage.\n8. It operates in an Electrical Mains connection of 440 V/50 Hz.\n9. The nominal current consumption is 27.2 Amps.\n10. Recommended fusing is 32 Amps.\n11. The power input of this electric heater is 18 kW.\n12. Electric connection is through a connection plug type CEE 32A – 5 Pin.\n13. The cable length provided along with the heater is 2.6 meters.\n14. It has a sound of 69 dB(A) at a distance of 1 meter.\n15. It has a Hose connector & Hose connection\n16. The Diameter of the hose of this electric heater is 300 mm.\n17. The maximum length of the hose is 15 meters.\n18. The dimensions of this electric heater are 400 mm x 400 mm x 480mm.\n19. The weight of this electric heater is 29.5 kg without packing.\n20. It has an adjustable thermostat to control the heat.",
null,
"#### Comparisons of different models of Electric Heaters\n\n Model Nos: VAC-TEH 20 T VAC-TEH 30 T VAC-TEH 70 VAC-TEH 100 Heating capacity Stage 1 [kW] 2.5 3.3 0 0 Stage 2 [kW] 6 9 Stage 3 [kW] 9 13.5 Stage 4 [kW] 12 18 Stage Max. [kcal] 2,150 2,837 10,300 15,400 Stage Max. [kW] 2.5 12 18 Amount of air Stage Max. [m³/h] 300 300 1,258 1,785 Air pressure Stage Max. [Pa] 40 40 110 130 Temperature increase in [°C] 40 54 35 38 Fans Axial Yes Yes Yes Yes Radial Number of fan stages 1 1 1 1 Electrical values Mains connection 230 V/50 Hz 230 V/50 Hz 400 V/50 Hz 400 V/50 Hz Nominal current consumption [A] 10.9 14.4 18 27.2 Recommended fusing [A] 16 16 32 32 Power input [kW] 2.5 3.3 12 18 Electric connection Connection plug CEE 7/7 CEE 7/7 CEE 32 A, 5-pin CEE 32 A, 5-pin Provided by the customer Cable length [m] 2.6 2.6 Sound values Distance 1 m [dB(A)] 63 63 66 69 Hose connector Hose connection Diameter [mm] 155 155 300 300 Max. hose length [m] 7 7 15 15 Dimensions Length (packaging excluded) [mm] 280 280 400 400 Width (packaging excluded) [mm] 260 260 400 400 Height (packaging excluded) [mm] 300 300 480 480 Weight (packaging excluded) [kg] 10.6 10.6 29.5 29.5 EQUIPMENT, FEATURES AND FUNCTIONS Operation Thermostat Image",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"## Electric Heaters by Master, Italy (Dantherm Group)\n\nMaster Climate Solutions (Italy) is part of Dantherm Group and is a group company of Dantherm. Their major models are listed below:\n\n1. ### Specification of Electric heater model VAC-B2PTC for poultry farm, warehouses\n\n1. It has a heating power of 1/2 kilowatts.\n2. (3,400-6,800 Btu per hour and 860-1,720 kcal per hour.)\n3. The air displacement is 97 m³ per hour.\n4. It works with 230 Volts and 50 Hertz power supply.\n5. The male plug from the heater side is 16A/3P.\n6. The rated current is 8.7 A.\n7. Switch Pos. 1 is 1 kilowatt and the switch pos. 2 is 2 kilowatt.\n8. It has a thermostat control.\n9. The temperature range is 0-40 °C.\n10. The electronic box protection is IP21.\n11. Box size (l x w x h) is 200 x 200 x 200 mm.\n12. With the net/gross weight of 1.9/2.1 kilograms.\n13. One pallet comes with 160 pcs.\n2. ### Specification of Electric heater model VAC-B 3PTC for warehouses, greenhouses, indoor farming.\n\n1. It has a heating power of 1.5/3 kilowatts.\n2. (3,400-10,200 Btu per hour and 860-2,580 kcal per hour.)\n3. The air displacement is 97 m³ per hour.\n4. It works with 230 Volts and 50 Hertz power supply.\n5. The male plug from the heater side is 16A/3P.\n6. The rated current is 13 A.\n7. Switch Pos. 1 is off and the switch pos. 2 is 1.5 kilowatt.\n8. Switch Pos. 3/4 is 3 kilowatts.\n9. It has a thermostat control.\n10. The temperature range is 0-40 °C.\n11. The electronic box protection is IP21.\n12. Box size (l x w x h) is 244 x 240 x 250 mm.\n13. With the net/gross weight of 3.4/3.7 kilograms.\n14. One pallet comes with 160 pcs.\n3. ### Specification of Electric heater model VAC-B2 for poultry farm, livestock, dairy farm.\n\n1. It has a heating power of 1/2 kilowatts.\n2. (3,400-6,800 Btu per hour and 860-1,720 kcal per hour.)\n3. The air displacement is 184 m³ per hour.\n4. It works with 230 Volts and 50-60 Hertz power supply.\n5. The male plug from the heater side is 16A/3P.\n6. The rated current is 8.7 A.\n7. Switch Pos. 1 is off and the switch pos. 2 is a fan.\n8. Switch Pos. 3 is 1.0/2.0 kilowatts.\n9. It has a thermostat control.\n10. The temperature range is 5-35 °C.\n11. The electronic box protection is IP24.\n12. Product size (l x w x h) is 200 x 200 x 330 mm.\n13. Box size (l x w x h) is 235 x 210 x 340 mm.\n14. With the net/gross weight of 3.7/4.2 kilograms.\n15. One pallet comes with 75 pcs.\n4. ### Specification of Electric heater model VAC-B3 for construction sites\n\n1. It has a heating power of 1.65/3.3 kilowatts.\n2. (5,630-11,260 Btu per hour and 1,430-2,860 kcal per hour.)\n3. The air displacement is 510 m³ per hour.\n4. It works with 230 Volts and 50-60 Hertz power supply.\n5. The male plug from the heater side is 16A/3P.\n6. The rated current is 14.5 A.\n7. Switch Pos. 1 is off and the switch pos. 2 is a fan.\n8. Switch Pos. 3 is 1.65/3.3 kilowatts.\n9. It has a thermostat control.\n10. The temperature range is 5-35 °C.\n11. The electronic box protection is IP24.\n12. Product size (l x w x h) is 260 x 260 x 410 mm.\n13. Box size (l x w x h) is 280 x 270 x 440 mm.\n14. With the net/gross weight of 5.1/5.7 kilograms.\n15. One pallet comes with 48 pcs.\n5. ### Specification of Electric heater model VAC-B5 for construction sites\n\n1. It has a heating power of 2.5/5 kilowatts.\n2. (8,530-17,000 Btu per hour and 2,150-4,300 kcal per hour.)\n3. The air displacement is 510 m³ per hour.\n4. It works with 3 phase 400 Volts and 50 Hertz power supply.\n5. The male plug from the heater side is 16A/5P.\n6. The rated current is 7.2 A.\n7. Switch Pos. 1 is off and the switch pos. 2 is a fan.\n8. Switch Pos. 3 is 2.5/5.0 kilowatts.\n9. It has a thermostat control.\n10. The temperature range is 5-35 °C.\n11. The electronic box protection is IP24.\n12. Product size (l x w x h) is 310 x 360 x 380 mm.\n13. Box size (l x w x h) is 380 x 330 x 440 mm.\n14. With a net/gross weight of 6.4/6.8 kilograms.\n15. One pallet comes with 24 pcs.\n6. ### Specification of Electric heater model VAC-B9 for warehouses, poultry farms, dairy farms, livestock\n\n1. It has a heating power of 4.5/9 kilowatts.\n2. (15,350-30,700 Btu per hour and 3,870-7,740 kcal per hour.)\n3. The air displacement is 800 m³ per hour.\n4. It works with 3 phase 400 Volts and 50 Hertz power supply.\n5. The male plug from the heater side is 16A/5P.\n6. The rated current is 13 A.\n7. Switch Pos. 1 is off and the switch pos. 2 is a fan.\n8. Switch Pos. 3 is 4.5/9.0 kilowatts.\n9. It has a thermostat control.\n10. The temperature range is 5-35 °C.\n11. The electronic box protection is IP24.\n12. Product size (l x w x h) is 340 x 420 x 440 mm.\n13. Box size (l x w x h) is 355 x 440 x 490 mm.\n14. With the net/gross weight of 9.3/10.8 kilograms.\n15. One pallet comes with 20 pcs.\n7. ### Specification of Electric heater model VAC-B15\n\n1. It has a heating power of 7.5/15 kilowatts.\n2. (25,600-51,200 Btu per hour and 6,450-12,900 kcal per hour.)\n3. The air displacement is 1700 m³ per hour.\n4. It works with 3 phase 400 Volts and 50 Hertz power supply.\n5. The male plug from the heater side is 32A/5P.\n6. The rated current is 22 A.\n7. Switch Pos. 1 is off and the switch pos. 2 is a fan.\n8. Switch Pos. 3 is 7.5/15 kilowatts.\n9. It has a thermostat control.\n10. The temperature range is 5-35 °C.\n11. The electronic box protection is IP24.\n12. Product size (l x w x h) is 350 x 470 x 490 mm.\n13. Box size (l x w x h) is 370 x 480 x 530 mm.\n14. With a net/gross weight of 15/15.9 kilograms.\n15. One pallet comes with 12 pcs.",
null,
"#### Product Description of Electrical Heater\n\n1. Brief Title of the device: Electric heaters for buildings, warehouses, tents, construction sites etc.\n2. Brief Description of the device: Electric heaters for increasing ambient temperature using Electric coils for Industrial and Commercial applications. Useful for buildings, warehouses, tents, construction sites etc.\n3. Model number: VAC-TEH70\n4. Brand: VackerGlobal\n5. Seller: VackerGlobal\n6. SKU Number: 1003000075\n7. Price (AED): 4550.00\n8. Price Validity: 30 June 2023"
] | [
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/uploads/2018/10/Electric-heater-UAE-Saudi-Qatar-Oman-Kuwait-Africa-VACTEH20T-e1538930049883-150x150.jpg",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/uploads/2018/10/Electric-heater-UAE-Saudi-Qatar-Oman-Kuwait-Africa-VACTEH30T-150x150.jpg",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/uploads/2018/10/Electric-heater-UAE-Saudi-Qatar-Oman-Kuwait-Africa-VACTEH70-e1538929856552-150x150.jpg",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/uploads/2018/10/Electric-heater-UAE-Saudi-Qatar-Oman-Kuwait-Africa-VACTEH100-e1538930003740-150x150.jpg",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.vackergroup.ae/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7734102,"math_prob":0.98456925,"size":11459,"snap":"2023-40-2023-50","text_gpt3_token_len":3281,"char_repetition_ratio":0.18315147,"word_repetition_ratio":0.6577273,"special_character_ratio":0.318876,"punctuation_ratio":0.14110187,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95569795,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,5,null,null,null,5,null,null,null,5,null,null,null,5,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T00:28:25Z\",\"WARC-Record-ID\":\"<urn:uuid:17f4cccd-c00a-4426-a954-3d574b8478d6>\",\"Content-Length\":\"119928\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4a221f6e-366a-4952-8318-250e2f7fe0d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:ebff6dc9-f203-42fa-bbb7-70dc4d868884>\",\"WARC-IP-Address\":\"139.59.85.15\",\"WARC-Target-URI\":\"https://www.vackergroup.ae/our-products/heaters/electric-heater/\",\"WARC-Payload-Digest\":\"sha1:MALNOFG4GIB2XULPNILYWVRRN76T2VHZ\",\"WARC-Block-Digest\":\"sha1:KBUNNJLJQAZGXAICQPYD3G3TISKKACTE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511023.76_warc_CC-MAIN-20231002232712-20231003022712-00440.warc.gz\"}"} |
http://colgatephys111.blogspot.com/2016/11/this-week-when-i-was-laying-around-at.html | [
"Friday, November 25, 2016\n\nPhysics in New Guinness World Record\n\nPhysics in New Guinness World Record\n\nThis week when I was laying around at home I was scrolling my Facebook feed and came across a video of a new Guinness World Record that was recently set. A man from the United Kingdom completed the highest bungee dunk. After watching the video, the first thing that came to mind was the amount of physics that was involved in order to precisely land this jump.\n\nThe physics principles that are involved are conservation of mechanical energy and springs.\nIt says that the total jump was 73.41 m, the man weighs approximately 95 kg, and the spring coefficient of the bungee cord is 60 N/m and the unstretched length is 20 m.\n\nWe can figure out the equilibrium length if the man is just hanging on it:\n\nmg/k = (95 kg)(9.8 m/s)/60 N/m = 15.5 m\n\nWe can also figure out what his fastest speed will be:\n\nmgho = 1/2 mv^2 + mghm + 1/2 kx^2\n\n(95 kg)(9.8m /s^2 )(73.41m) = 1/2 (95 kg) v^2 +(95kg)(9.8m /s^2 )(73.41m − 20m − 15.5m )+ 1/2(60N/m)(15.5)^2\n\nv= 23.3 m/s or 52.2 mph\n\nWe can also figure out how far from the ground he will be:\n\nmgho = 1/2 kx^2 + mghm\n\n(95kg)(9.8m/s^2 )(73.41m) = 1/2(60 N/m)(x)^2 +(95kg)(9.8m/s^2 )(73.41m − 20m − x)\n0=17.5x^2-931x-18620\nx=44.9 m + 20 =64.9 m --> 8.51 m off the ground\n\nHis Max acceleration:\nFmax = mamax = −kx − mg = −(60N ×(−44.9m)) −(95kg× 9.8m/s^2 ) = 1763 N\nF = ma → (95kg)a = 1763N → a = 18.6 m/s^2\nor 3.8 g's"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91645974,"math_prob":0.9928617,"size":1499,"snap":"2019-43-2019-47","text_gpt3_token_len":552,"char_repetition_ratio":0.098996654,"word_repetition_ratio":0.014440433,"special_character_ratio":0.39626417,"punctuation_ratio":0.113513514,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99252695,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-15T02:24:51Z\",\"WARC-Record-ID\":\"<urn:uuid:2641b762-3442-45c3-82a0-88d46eca06a6>\",\"Content-Length\":\"63830\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cecd30d1-96bb-476e-a7f8-1c29506bba6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:b47c2531-652f-4edb-a377-91ec046a5a82>\",\"WARC-IP-Address\":\"172.217.7.225\",\"WARC-Target-URI\":\"http://colgatephys111.blogspot.com/2016/11/this-week-when-i-was-laying-around-at.html\",\"WARC-Payload-Digest\":\"sha1:64RWAWMEJMCIWBHCHMOMDXEY4RWV2CEQ\",\"WARC-Block-Digest\":\"sha1:SQ4V3OIWOLXXFRH2XT63CV4K2VP2GXWT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986655735.13_warc_CC-MAIN-20191015005905-20191015033405-00390.warc.gz\"}"} |
https://barpsi.com/9.6-bar-to-psi | [
"",
null,
"# 9.6 Bar to Psi\n\n## Convert 9.6 Bar to Psi. 9.6 Bar to Psi conversion.\n\n9.6\n\n=\n\n139.236192\n\nLooking to find what is 9.6 Bar in Psi? Want to convert 9.6 Bar units to Psi units?\n\nUsing a simple formula, 9.6 Bar units are equal to 139.236192 Psi units.\n\nWant to convert 9.6 Bar into other Bar units?\n\nBar, Psi, Bar to Psi, Bar in Psi, 9.6 Bar to Psi, 9.6 Bar in Psi"
] | [
null,
"https://barpsi.com/assets/google-play-bar-to-psi.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7706198,"math_prob":0.6628042,"size":284,"snap":"2019-43-2019-47","text_gpt3_token_len":98,"char_repetition_ratio":0.23214285,"word_repetition_ratio":0.036363635,"special_character_ratio":0.38732395,"punctuation_ratio":0.22093023,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9745503,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T17:37:28Z\",\"WARC-Record-ID\":\"<urn:uuid:9ce9c9b9-a60b-4ece-b932-189096e5f75b>\",\"Content-Length\":\"24202\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2924dff2-4836-4a34-af52-ccc5a2840184>\",\"WARC-Concurrent-To\":\"<urn:uuid:ea4f3a06-41aa-4a25-813b-27c22aacd82b>\",\"WARC-IP-Address\":\"142.93.178.22\",\"WARC-Target-URI\":\"https://barpsi.com/9.6-bar-to-psi\",\"WARC-Payload-Digest\":\"sha1:J6TVGGI4J6JBVGOHZDQPV5NIPZBHLLC3\",\"WARC-Block-Digest\":\"sha1:DX42HGJESFDEJVVNL7325T6ZRPEFLMVE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668699.77_warc_CC-MAIN-20191115171915-20191115195915-00483.warc.gz\"}"} |
https://www.tweag.io/blog/2020-01-16-data-vs-control/ | [
"Haskell’s Data and Control module hierarchies have always bugged me. They feel arbitrary. There’s Data.Functor and Control.Monad—why? Monads are, after all, functors. They should belong to the same hierarchy!\n\nI’m not that person anymore. Now, I understand that the intuition behind the Data/Control separation is rooted in a deep technical justification. But—you rightly insist—monads are still functors! So what’s happening here? Well, the truth is that there are two different kinds of functors. But you could never tell them apart because they coincide in regular Haskell.\n\nBut they are different—so let’s split them into two kinds: data functors and control functors. We can use linear-types to show why they are different. Let’s get started.\n\n## Data functors\n\nIf you haven’t read about linear types, you may want to check out Tweag’s other posts on the topic. Notwithstanding, here’s a quick summary: linear types introduce a new type a ⊸ b of linear functions. A linear function is a function that, roughly, uses its argument exactly once.\n\nWith that in mind, let’s consider a prototypical functor: lists.\n\ninstance Functor [] where\nfmap :: (a -> b) -> [a] -> [b]\nfmap f [] = []\nfmap f (a:l) = (f a) : (fmap f l)\n\nHow could we give it a linear type?\n\n• Surely, it’s ok to take a linear function as an argument (if fmap works on any function, it will work on functions which happen to be linear).\n• The f function is, on the other hand, not used linearly: it’s used once per element of a list (of which there can be many!). So the second arrow must be a regular arrow.\n• However, we are calling f on each element of the list exactly once. So it makes sense to make the rightmost arrow linear—exactly once.\n\nSo we get the following alternative type for list’s fmap:\n\nfmap :: (a ⊸ b) -> [a] ⊸ [b]\n\nList is a functor because it is a container of data. It is a data functor.\n\nclass Data.Functor f where\nfmap :: (a ⊸ b) -> f a ⊸ f b\n\nSome data functors can be extended to applicatives:\n\nclass Data.Applicative f where\npure :: a -> f a\n(<*>) :: f (a ⊸ b) ⊸ f a ⊸ f b\n\nThat means that containers of type f a can be zipped together. It also constrains the type of pure: I typically need more than one occurrence of my element to make a container that can be zipped with something else. Therefore pure can’t be linear.\n\nAs an example, vectors of size 2 are data applicatives:\n\ndata V2 a = V2 a a\n\ninstance Data.Functor f where\nfmap f (V2 x y) = V2 (f x) (f y)\n\ninstance Data.Applicative f where\npure x = V2 x x\n(V2 f g) <*> (V2 x y) = V2 (f x) (g y)\n\nLists would almost work, too, but there is no linear way to zip together two lists of different sizes. Note: such an instance would correspond to the Applicative instance of ZipList in base, the Applicative instance for [], in base is definitely not linear (left as an exercise to the reader).\n\n## Control functors\n\nThe story takes an interesting turn when considering monads. There is only one reasonable type for a linear monadic bind:\n\n(>>=) :: m a ⊸ (a ⊸ m b) ⊸ m b\n\nAny other choice of linearization and you will either get no linear values at all (if the continuation is given type a -> m b), or you can’t use linear values anywhere (if the two other arrows are non-linear). In short: if you want the do-notation to work, you need monads to have this precise type.\n\nNow, you may remember that, from (>>=) alone, it is possible to derive fmap:\n\nfmap :: (a ⊸ b) ⊸ m a ⊸ m b\nfmap f x = x >>= (\\a -> return (f a))\n\nBut wait! Something happened here: all the arrows are linear! We’ve just discovered a new kind of functor! Rather than containing data, we see them as wrapping a result value with an effect. They are control functors.\n\nclass Control.Functor m where\nfmap :: (a ⊸ b) ⊸ m a ⊸ m b\n\nclass Control.Applicative m where\npure :: a ⊸ m a -- notice how pure is linear, but Data.pure wasn't\n(<*>) :: m (a ⊸ b) ⊸ m a ⊸ m b\n\n(>>=) :: (>>=) :: m a ⊸ (a ⊸ m b) ⊸ m b\n\nLists are not one of these. Why? Because you cannot map over a list with a single use of the function! (Neither is Maybe because you may drop the function altogether, which is not permitted either.)\n\nThe prototypical example of a control functor is linear State\n\nnewtype State s a = State (s ⊸ (s, a))\n\ninstance Control.Functor (State s) where\nfmap f (State act) = \\s -> on2 f (act s)\nwhere\non2 :: (a ⊸ b) ⊸ (s, a) ⊸ (s, b)\non2 g (s, a) = (s, g b)\n\n## Conclusion\n\nThere you have it. There indeed are two kinds of functors: data and control.\n\n• Data functors are containers: they contain many values; some are data applicatives that let you zip containers together.\n• Control functors contain a single value and are all about effects; some are monads that the do-notation can chain.\n\nThat is all you need to know. Really.\n\nBut if you want to delve deeper, follow me to the next section because there is, actually, a solid mathematical foundation behind it all. It involves a branch of category theory called enriched category theory.\n\nEither way, I hope you enjoyed the post and learned lots. Thanks for reading!\n\n## Appendix: The maths behind it all\n\nBriefly, in a category, you have a collection of objects and sets of morphisms between them. Then, the game of category theory is to replace sets in some part of mathematics, by objects in some category. For example, one can substitute “set” in the definition of group by topological space (topological group) or by smooth manifold (Lie group).\n\nEnriched category theory is about playing this game on the definition of categories itself: a category enriched in $\\mathcal{C}$ has a collection of objects and objects-of-$\\mathcal{C}$ of morphisms between them.\n\nFor instance, we can consider categories enriched in abelian groups: between each pair of objects there is an abelian group of morphisms. In particular, there is at least one morphism, 0, between each pair of objects. The category of vector spaces over a given field (and, more generally, of modules over a given ring) is enriched in abelian groups. Categories enriched in abelian groups are relevant, for instance, to homology theory.\n\nThere is a theorem that all symmetric monoidal closed categories (of which the category of abelian groups is an example) are enriched in themselves. Therefore, the category of abelian groups itself is another example of a category enriched in abelian groups. Crucially for us, the category of types and linear functions is also symmetric monoidal closed. Hence is enriched in itself!\n\nFunctors can either respect this enrichment (in which case we say that they are enriched functors) or not. In the category Hask (seen as a proxy for the category of sets), this theorem is just saying that all functors are enriched because “Set-enriched functor” means the same as “regular functor”. That’s why Haskell without linear types doesn’t need a separate enriched functor type class.\n\nIn the category of abelian groups, the functor which maps $A$ to $A\\otimes A$ is an example of a functor which is not enriched: the map from $A → B$ to $A\\otimes A → B\\otimes B$, which maps $f$ to $f\\otimes f$ is not a group morphism. But the functor from $A$ to $A\\oplus A$ is.\n\nControl functors are the enriched functors of the category of linear functions, while data functors are the regular functors.\n\nHere’s the last bit of insight: why isn’t there a Data.Monad? The mathematical notion of a monad does apply perfectly well to data functors—it just wouldn’t be especially useful in Haskell. We need the monad to be strong for things like the do-notation to work correctly. But, as it happens, a strong functor is the same as an enriched functor, so data monads aren’t strong. Except in Hask, of course, where data monads and control monads, being the same, are, in particular, strong."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8972539,"math_prob":0.9499492,"size":7599,"snap":"2020-45-2020-50","text_gpt3_token_len":1942,"char_repetition_ratio":0.13285056,"word_repetition_ratio":0.043010753,"special_character_ratio":0.24121594,"punctuation_ratio":0.12987843,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996313,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T13:29:05Z\",\"WARC-Record-ID\":\"<urn:uuid:605aa472-0f8b-413f-b8c4-3e30319442d8>\",\"Content-Length\":\"283397\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:908daec3-4a9b-4d86-8631-153c77161cd8>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef2f778a-3869-406b-8234-4bf0cc37b163>\",\"WARC-IP-Address\":\"104.31.69.163\",\"WARC-Target-URI\":\"https://www.tweag.io/blog/2020-01-16-data-vs-control/\",\"WARC-Payload-Digest\":\"sha1:Q37KFFT2SQXQMCWKMJQES43OZVMKMACE\",\"WARC-Block-Digest\":\"sha1:EG6EJ3VEVBJ5RSLENBHAQY336TOFHXLE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107876500.43_warc_CC-MAIN-20201021122208-20201021152208-00699.warc.gz\"}"} |
https://www.hackmath.net/en/example/6871 | [
"# Proportion 2\n\nA car is able to travel 210 km in 3 hours. How far can it travel in 5 hours?\nPut what kind of proportion is this and show your solution.\n\nResult\n\nx = 350 km\n\n#### Solution:",
null,
"Leave us a comment of example and its solution (i.e. if it is still somewhat unclear...):\n\nShowing 0 comments:",
null,
"Be the first to comment!",
null,
"#### To solve this example are needed these knowledge from mathematics:\n\nNeed help calculate sum, simplify or multiply fractions? Try our fraction calculator. Do you want to convert length units? Most natural application of trigonometry and trigonometric functions is a calculation of the triangles. Common and less common calculations of different types of triangles offers our triangle calculator. Word trigonometry comes from Greek and literally means triangle calculation.\n\n## Next similar examples:\n\n1. Candies",
null,
"There are red, blue and green candies in bad. Red to green is in 6:11 ratio and blue to red in a 7: 5 ratio. In what proportion are blue to green candies?\n2. Good swimmer",
null,
"Good swimmer swims 23 m distance with ten shots. With how many shots he swim to an island located 81 m if still swims at the same speed?\n3. Land area",
null,
"A land area of Asia and Africa are in a 3: 2 ratio, the European and African are is 1:3. What are the proportions of Asia, Africa, and Europe?\n4. Masons",
null,
"1 mason casts 30.8 meters square in 8 hours. How long casts 4 masons 178 meters square?\n5. Medians 2:1",
null,
"Median to side b (tb) in triangle ABC is 12 cm long. a. What is the distance of the center of gravity T from the vertex B? b, Find the distance between T and the side b.\n6. Three monks",
null,
"Three medieval monks has task to copy 600 pages of the Bible. One rewrites in three days 1 page, second in 2 days 3 pages and a third in 4 days 2 sides. Calculate for how many days and what day the monks will have copied whole Bible when they begin Wednesd\n7. Scale of the map",
null,
"Determine the scale of the map if the actual distance between A and B is 720 km and distance on the map is 20 cm.\n8. Liters od milk",
null,
"The cylinder-shaped container contains 80 liters of milk. Milk level is 45 cm. How much milk will in the container, if level raise to height 72 cm?\n9. Simple equation 6",
null,
"Solve equation with one variable: X/2+X/3+X/4=X+4\n10. Poplar",
null,
"How tall is a poplar by the river, if we know that 1/5 of its total height is a trunk, 1/10th of the height is the root and 35m from the trunk to the top of the poplar?\n11. Seamstress",
null,
"The seamstress cut the fabric into 3 parts. The first part was the eighth fabric, the second part was three-fifths of the fabric and the third part had a length of 66 cm. Calculate the original length of the fabric.\n12. Equation with x",
null,
"Solve the following equation: 2x- (8x + 1) - (x + 2) / 5 = 9\n13. Inquality",
null,
"Solve inequality: 3x + 6 > 14\n14. Wiring 2",
null,
"Willie cut a piece of wire that was 3/8 of the total length of the wire. He cut another piece that was 9m long. The two pieces together were one half of the total length of the wire. How long was the wire before he cut it?\n15. Tailor",
null,
"Tailor bought 2 3/4 meters of textile and paid 638 CZK. Determine the price per 1 m of the textile.\n16. UN 1",
null,
"If we add to an unknown number his quarter, we get 210. Identify unknown number.\n17. Lengths of the pool",
null,
"Miguel swam 6 lengths of the pool. Mat swam 3 times as far as Miguel. Lionel swam 1/3 as far as Miguel. How many lengths did mat swim?"
] | [
null,
"https://www.hackmath.net/tex/2fa/2fae670ab34de.png",
null,
"https://www.hackmath.net/hashover/images/first-comment.png",
null,
"https://www.hackmath.net/hashover/images/avatar.png",
null,
"https://www.hackmath.net/thumb/56/t_4656.jpg",
null,
"https://www.hackmath.net/thumb/30/t_3630.jpg",
null,
"https://www.hackmath.net/thumb/97/t_2697.jpg",
null,
"https://www.hackmath.net/thumb/96/t_1896.jpg",
null,
"https://www.hackmath.net/thumb/9/t_8209.jpg",
null,
"https://www.hackmath.net/thumb/66/t_2466.jpg",
null,
"https://www.hackmath.net/thumb/27/t_2427.jpg",
null,
"https://www.hackmath.net/thumb/9/t_1809.jpg",
null,
"https://www.hackmath.net/thumb/94/t_7594.jpg",
null,
"https://www.hackmath.net/thumb/46/t_7946.jpg",
null,
"https://www.hackmath.net/thumb/36/t_5636.jpg",
null,
"https://www.hackmath.net/thumb/64/t_3564.jpg",
null,
"https://www.hackmath.net/thumb/5/t_7605.jpg",
null,
"https://www.hackmath.net/thumb/7/t_6807.jpg",
null,
"https://www.hackmath.net/thumb/57/t_2257.jpg",
null,
"https://www.hackmath.net/thumb/40/t_3640.jpg",
null,
"https://www.hackmath.net/thumb/94/t_6994.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9370961,"math_prob":0.98291874,"size":2956,"snap":"2019-13-2019-22","text_gpt3_token_len":827,"char_repetition_ratio":0.104674794,"word_repetition_ratio":0.03697479,"special_character_ratio":0.27401894,"punctuation_ratio":0.1040724,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9854818,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-19T17:25:36Z\",\"WARC-Record-ID\":\"<urn:uuid:bcba2d5f-1e2c-450f-95f6-9d8116b02045>\",\"Content-Length\":\"20044\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1162d5fe-2aef-44f9-8f7f-6a6f615b7708>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b7d07cc-41a9-4af2-967f-edc763166af9>\",\"WARC-IP-Address\":\"104.24.105.91\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/example/6871\",\"WARC-Payload-Digest\":\"sha1:UNEIR5FKI5WXBYJQVLMQCOZLJMSSDWY2\",\"WARC-Block-Digest\":\"sha1:PCJGH2BSLQPTTTK2434CZJ742EDWNSLH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232255071.27_warc_CC-MAIN-20190519161546-20190519183546-00487.warc.gz\"}"} |
http://mathandmultimedia.com/tag/decimal-number-system/ | [
"## How to Change Number Bases Part 2\n\nIn the previous post, we have learned how to change numbers form one base to other. In this post, we are going to discuss more examples of number bases particularly the two number systems used in computers: the binary and the hexadecimal system.\n\nThe Binary Number System\n\nThe binary number system has base 2 and only uses 1 and 0 as digits. The binary number 1101 in expanded form is",
null,
"$1 \\times 2^3 + 1 \\times 2^2 + 0 \\times 2^1 + 1 \\times 2^0$ or » Read more\n\n## How to Change Number Bases Part 1\n\nI have already discussed clock arithmetic, modulo division, and number bases. We further our discussion in this post by learning how to change numbers from one base to another.\n\nThe number system that we are using everyday is called the decimal number system or the base 10 number system (deci means 10). It is believed that this system was developed because we have 10 fingers.\n\nIn the base 10 system, the digits are composed of 0 up to 9. Adding 1 to 9, the largest digit in this system, will give us 10. That is, we replace 9 in the ones place with 0, and add 1 to the tens place which is the next larger place value.\n\nAnother way to write a number in base 10 is by multiplying its digits by powers of 10 and adding them. For example, the number 2578 can be rewritten in expanded form as",
null,
"$2(10^3) + 5(10^2) + 7(10^1) + 8(10^0)$» Read more\n\n## What If We Have 12 Fingers?\n\nOur number system is called the decimal system (deci means 10) because we count in groups of 10’s. This is probably because we have 10 fingers. What do I mean when I said when we count in groups of 10?",
null,
"Our number system has the digits 0 to 9, and then we when we reach the 10th number, we place 1 in the tens place 0 in the ones digit. In the decimal number system, 23 means that we have 2 tens and 3 ones. Similarly, the number 452 means that we have 4 groups hundreds (10 tens), 5 groups of tens and 8 ones. In fact, if we use the expanded notation, 452 is equal to",
null,
"$4 \\times 10^2 + 5 \\times 10^1 + 2 \\times 10^0$.\n\nNotice that each number is multiplied by powers of 10. » Read more"
] | [
null,
"http://s0.wp.com/latex.php",
null,
"http://s0.wp.com/latex.php",
null,
"http://upload.wikimedia.org/wikipedia/commons/thumb/3/32/Human-Hands-Front-Back.jpg/320px-Human-Hands-Front-Back.jpg",
null,
"http://s0.wp.com/latex.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92682403,"math_prob":0.9968373,"size":1836,"snap":"2022-05-2022-21","text_gpt3_token_len":449,"char_repetition_ratio":0.15829694,"word_repetition_ratio":0.011235955,"special_character_ratio":0.25599128,"punctuation_ratio":0.0964467,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99759984,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T10:20:37Z\",\"WARC-Record-ID\":\"<urn:uuid:5b08300d-91ed-48ee-9aeb-44582ee20a94>\",\"Content-Length\":\"38658\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c37fdb37-becd-442e-a1e1-2e3e781b1204>\",\"WARC-Concurrent-To\":\"<urn:uuid:2e9df73c-753e-4f08-b7eb-df594d31c6a5>\",\"WARC-IP-Address\":\"166.62.28.131\",\"WARC-Target-URI\":\"http://mathandmultimedia.com/tag/decimal-number-system/\",\"WARC-Payload-Digest\":\"sha1:SOCG2D2VCA55QJAZIQO5XJNN4JPNO5DE\",\"WARC-Block-Digest\":\"sha1:NBAJBIABGDMGTITO7TYVRNHMMZTLI3GC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662604794.68_warc_CC-MAIN-20220526100301-20220526130301-00000.warc.gz\"}"} |
https://mathematica.stackexchange.com/questions/265260/rectangular-image-segmentation/268469 | [
"# Rectangular Image Segmentation?\n\nI'd like to segment an image such that the components are rectangles. Is this possible out of the box? For example:\n\nimg = CloudGet[CloudObject[\"https://www.wolframcloud.com/obj/1e937fa7-80d2-4db2-8e47-80fe376c0e8f\"]]\nColorize @ WatershedComponents[ ColorConvert[img, \"Grayscale\"] ]",
null,
"You can imagine the output being something like the above but with rectangles instead of polygons, the fewer the better, could look something like this (a poor approximation drawn using Canvas[]):",
null,
"A follow-up would be to specify thresholds and control min and max size/aspects of the rectangles.\n\nUpdate\n\nHere are a few additional examples to test solutions:\n\nmoreTests = CloudGet[\"https://www.wolframcloud.com/obj/\"<>#]&/@\n{\"04b63372-f4cb-4c4f-9161-a8ca581b01fa\",\"68b066cb-9775-44de-aba4-717449293713\",\n\"cb413bbb-56ac-40b3-92dd-2029c0c40b2e\",\"3f4b3af3-ce6e-4512-b917-17b0efa80fc9\",\n\n• The principal singular vectors make an interesting first approximation, but I don't where to go from here: {uu, ss, vv} = SingularValueDecomposition[ImageData@RemoveAlphaChannel@ColorConvert[img0, \"Grayscale\"]]; Colorize@ WatershedComponents[Image[uu[[All, {1}]] . Transpose@vv[[All, {1}]]], Method -> {\"MinimumSaliency\", 0.07}]. Needs something like a follow-up routine to combine some adjacent rectangles. But image-processing is not my thing. Mar 19 at 3:07\n• It looks like a superpixels problem, I haven't seen a square superpixel algorithm yet, a similar effect is: epfl.ch/labs/ivrl/research/slic-superpixels May 17 at 2:35\n• Do you have any guidance on the criteria you want for your segmentation? As stated, the problem is too unconstrained to really come up with a meaningful solution. Let's say your image is a rectangle, split along the diagonal such that on one side of the diagonal it's solid red and the other side of the diagonal is solid blue. What would you want this algorithm to do with that? May 17 at 15:17\n\nModification 2\n\nI have significantly improved the code to get even closer to what M.R. was looking for. Rectangles now grow in width and height, reducing the number of narrow slices. Also, they grow from the four corners of the image. The function has two options:\n\n1. \"RectCount\" allows you to specify the number of rectangles drawn, so you can get a partial result superimposed on the original image.\n2. \"Collage\" (true or false) indicates if a collage of the image covered with rectangles with the original image is desired (default is false).\n(* ===== The function ======= *)\ngrowRects[img_, OptionsPattern[]] :=\nModule[{image, pixelBlockSize, colorDiffLimit, subimages,\nsubColorList, pixelBlockCoordinates, pixelBlockCoordinatesLinear,\nblocksInRect, initialBlockCoordinate, initialBlockColor, maxWidth,\nmaxHeight, lastColumn, lastRow, newColumn,\ninitialBlockCoordinateLinear, linearIndex, available, colorMatched,\nwithinBounds, newColumnAllowed, newRowAllowed, rectsList, lb,\nnewRow, rectColor, t, rt, initialBlockCoordinateLinear2,\ninitialBlockCoordinateLinear3, initialBlockCoordinateLinear4,\nlinearIndex2, linearIndex3, linearIndex4},\n\n(* Users may modify the following to see the effect *)\n\npixelBlockSize = 10;\ncolorDiffLimit = 0.15;\n\n(* Create block size images and a corresponding list of mean color \\\nfor each subimage *)\nsubimages = ImagePartition[img, pixelBlockSize];\nsubColorList =\nTable[\nImageMeasurements[subimages[[i]], \"Mean\"], {i, 1,\nLength[subimages]}];\n\n(* Reassemble the image -\nimage partition may have introduced some cropping *)\n\nimage = ImageAssemble[subimages];\n\n(* get the coordinates in terms of pixel blocks -\nwe will grow rectangles by adding rows or columns of blocks *)\n(*\nwe need four versions to facilitate dealing with four coordinate \\\nsystems (one to draw from each corner) *)\npixelBlockCoordinates = {\nTable[{i, j}, {i, 1, Length[subimages]}, {j, 1,\nLength[subimages[]]}],\nTable[{i, j}, {i, Length[subimages], 1, -1}, {j, 1,\nLength[subimages[]]}],\nTable[{i, j}, {i, 1, Length[subimages]}, {j,\nLength[subimages[]], 1, -1}],\nTable[{i, j}, {i, Length[subimages], 1, -1}, {j,\nLength[subimages[]], 1, -1}]\n};\n\n(* The equivalent linear list of coordinates for each of the four \\\nsystems *)\n\npixelBlockCoordinatesLinear =\nFlatten[#, 1] & /@ pixelBlockCoordinates;\n\n(* area to fill in pixelBlocksSize *)\n{maxWidth, maxHeight} = Dimensions[subimages];\nareaPoints = maxWidth*maxHeight;\n\n(* number of blocks filled *)\nblocksInRect = 0;\n\n(* Using a rectangle counter *)\n\nIf[IntegerQ[count = OptionValue[\"RectCount\"]], useCounter = True;\ncounter = 1, useCounter = False];\n\n(* refers to our four modes of filling rectangles from each corner \\\n- we will rotate later*)\nseedArray = {1, 2, 3, 4};\n\n(* Start the rectangle loop -\neach rectangle starts at the first point of \\\npixelBlockCoordinatesLinear that is not 0 *)\n\nWhile[areaPoints > blocksInRect,\nIf[useCounter && (counter++ > count), Break[]];\n\n(* Each mode has different directions to grow rectangles *)\n\ns = First[seedArray];\nWhich[\n];\nseedArray = RotateLeft[seedArray];\n\n(* color assigned to the next rectangle *)\n\nrectColor = RandomColor[];\nim = Image[\nRandomChoice[{rectColor}, {pixelBlockSize, pixelBlockSize}]];\n\n(* the algorithm ensure that the first non zero element of each \\\nlist of coordinates is the one we want *)\ninitialBlockCoordinate =\nFirstCase[pixelBlockCoordinatesLinear[[s]], Except];\n\n(* find the linear position in each list of coordinate \\\ncorresponding to our starting point *)\n\ninitialBlockCoordinateLinear =\nmaxHeight*(initialBlockCoordinate[] -\n1) + (initialBlockCoordinate[]);\ninitialBlockCoordinateLinear2 =\nmaxHeight*(maxWidth -\ninitialBlockCoordinate[]) + (initialBlockCoordinate[]);\ninitialBlockCoordinateLinear3 =\nmaxHeight*(initialBlockCoordinate[] - 1) + {maxHeight -\ninitialBlockCoordinate[] + 1};\ninitialBlockCoordinateLinear4 =\nmaxHeight*(maxWidth - initialBlockCoordinate[]) + {maxHeight -\ninitialBlockCoordinate[] + 1};\n\n(* remove used blocks from availability in all list *)\n\npixelBlockCoordinatesLinear[][[initialBlockCoordinateLinear]] =\n0;\npixelBlockCoordinatesLinear[][[initialBlockCoordinateLinear2]] =\n0;\npixelBlockCoordinatesLinear[][[initialBlockCoordinateLinear3]] =\n0;\npixelBlockCoordinatesLinear[][[initialBlockCoordinateLinear4]] =\n0;\n\n(* color the initial block with the random color *)\n\nsubimages[[initialBlockCoordinate[]]][[\ninitialBlockCoordinate[]]] = im;\n\n(* this is the color of the subimage at this location *)\n\ninitialBlockColor =\nsubColorList[[initialBlockCoordinate[]]][[\\\ninitialBlockCoordinate[]]];\n\n(* the first block in our rectangle *)\n\nlastColumn = {initialBlockCoordinate};\nlastRow = {initialBlockCoordinate};\nblocksInRect = blocksInRect + 1;\n\n(* Now we can alternate between adding rows and columns *)\nnewColumnAllowed = True;\nnewRowAllowed = True;\n\n(* Start the rectangle growing loop *)\n\nWhile[newColumnAllowed || newRowAllowed,\n\n(* Attempt to add a column - extend the X coordinate -\ntest validity (available blocks, color) *)\nIf[newColumnAllowed,\n(* create new column from existing last column *)\n\nnewColumn = {#[] + cadd, #[]} & /@ lastColumn;\n\n(* is the new column within bounds *)\n\nwithinBounds = AllTrue[newColumn, 1 <= #[] <= maxWidth &];\nIf[withinBounds,\n(* get position ou the points in our four list of coordinates *)\n\nlinearIndex = maxHeight*(#[] - 1) + (#[]) & /@ newColumn;\nlinearIndex2 =\nmaxHeight*(maxWidth - #[]) + (#[]) & /@ newColumn;\nlinearIndex3 =\nmaxHeight*(#[] - 1) + {maxHeight - #[] + 1} & /@\nnewColumn;\nlinearIndex4 =\nmaxHeight*(maxWidth - #[]) + {maxHeight - #[] + 1} & /@\nnewColumn;\n\n(* verifiy that coordinates have not been used before *)\n\navailable =\nNoneTrue[\npixelBlockCoordinatesLinear[][[#]] & /@ linearIndex,\nIntegerQ];\nIf[available,\n(* check color *)\n\ncolorMatched =\nAllTrue[subColorList[[#[], #[]]] & /@ newColumn,\nMax@Abs[Take[#, 3] - Take[initialBlockColor, 3]] <\ncolorDiffLimit &];\nIf[colorMatched,\n(* keep the new column,\nreplacing the last column and adding a point to last row *)\n\nblocksInRect = blocksInRect + Length[newColumn];\nlastColumn = newColumn;\nlastRow = AppendTo[lastRow, Last[lastRow] + {cadd, 0}];\n\n(*\nMark column coordinates as not available in all four lists by \\\nsetting coordinate to 0 *)\n(pixelBlockCoordinatesLinear[[\n1]][[#]] = 0) & /@ linearIndex;\n(pixelBlockCoordinatesLinear[][[#]] = 0) & /@\nlinearIndex2;\n(pixelBlockCoordinatesLinear[][[#]] = 0) & /@\nlinearIndex3;\n(pixelBlockCoordinatesLinear[][[#]] = 0) & /@\nlinearIndex4;\n\n(* convert new column to full size colored blocks *)\n\nFor[i = 1, i <= Length[newColumn], i++,\n\nsubimages[[newColumn[[i]][]]][[newColumn[[i]][]]] =\nim;\n]\n\n, newColumnAllowed = False (* not color matched *)]\n, newColumnAllowed = False (* not available *)]\n, newColumnAllowed = False (* not within bounds *)]\n];\n\n(* Attempt to add a row - extend the Y coordinate -\ntest validity (available blocks, color) *)\nIf[newRowAllowed,\n(* create new row from existing last row *)\n\nnewRow = {#[], #[] + radd} & /@ lastRow;\n\n(* is the new row within bounds *)\n\nwithinBounds = AllTrue[newRow, 1 <= #[] <= maxHeight &];\nIf[withinBounds ,\n(* get position ou the points in our four list of coordinates *)\n\nlinearIndex = maxHeight*(#[] - 1) + (#[]) & /@ newRow;\nlinearIndex2 =\nmaxHeight*(maxWidth - #[]) + (#[]) & /@ newRow;\nlinearIndex3 =\nmaxHeight*(#[] - 1) + {maxHeight - #[] + 1} & /@ newRow;\nlinearIndex4 =\nmaxHeight*(maxWidth - #[]) + {maxHeight - #[] + 1} & /@\nnewRow;\n\n(* verifiy that coordinates have not been used before *)\n\navailable =\nNoneTrue[\npixelBlockCoordinatesLinear[][[#]] & /@ linearIndex,\nIntegerQ];\nIf[available,\n(* check color *)\n\ncolorMatched =\nAllTrue[subColorList[[#[], #[]]] & /@ newRow,\nMax@Abs[Take[#, 3] - Take[initialBlockColor, 3]] <\ncolorDiffLimit &];\nIf[colorMatched,\n(* keep the new row,\nreplacing the last row and adding a point to last column *)\n\nblocksInRect = blocksInRect + Length[newRow];\nlastRow = newRow;\nlastColumn =\n\n(*\nMark row coordinates as not available in all four lists by \\\nsetting coordinate to 0 *)\n(pixelBlockCoordinatesLinear[[\n1]][[#]] = 0) & /@ linearIndex;\n(pixelBlockCoordinatesLinear[][[#]] = 0) & /@\nlinearIndex2;\n(pixelBlockCoordinatesLinear[][[#]] = 0) & /@\nlinearIndex3;\n(pixelBlockCoordinatesLinear[][[#]] = 0) & /@\nlinearIndex4;\n\n(* convert new row to full size colored blocks *)\n\nFor[i = 1, i <= Length[newRow], i++,\nsubimages[[newRow[[i]][]]][[newRow[[i]][]]] = im;\n];\n, newRowAllowed = False (* not color matched *)]\n, newRowAllowed = False (* not available *)]\n, newRowAllowed = False (* not within bounds *)]\n]\n];\n\n];\nassembly = ImageAssemble[subimages];\nIf[OptionValue[\"Collage\"], ImageCollage[{image, assembly}], assembly]\n]\nOptions[growRects] = {\"RectCount\" -> Infinity, \"Collage\" -> False};\n\n\nHere are function calls with examples of results:\n\ngrowRects[img1]\ngrowRects[img1,\"RectCount\"->10]\ngrowRects[img1,\"Collage\"->True]\n... and a few more collages",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Modification (original post, now replaced by the above)\n\nIn the various examples provided, some have 4 channels, some 3 channels. The code now tests for this, so all the examples work. Also, the variable subsize was set at 16 pixels before, but the optimal value depends on the size of the file. It is now set to 1/100 of the width of the image, but you can play with it to optimize a particular image. Unfortunately, this code produces narrow rectangles, which is not exactly what was required.\n\n(* Get the image *)\n\nimg = CloudGet[\nCloudObject[\n\"https://www.wolframcloud.com/obj/1e937fa7-80d2-4db2-8e47-\\\n80fe376c0e8f\"]];\n\n(* Partition the image *)\nsubsize = 1/100*ImageDimensions[img][];\nsubimages = ImagePartition[img, subsize];\n\n(* Measure average color of subimages *)\n\nt = Table[\nImageMeasurements[subimages[[i]], \"Mean\"], {i, 1,\nLength[subimages], 1}];\n\n(* Get positions where color change greater than 0.1 in one direction \\\n*)\n\nchannels = ImageMeasurements[img, \"Channels\"];\nWhich[channels == 4, ch1 = {0., 0., 0., _}; ch2 = {_, _, _, _},\nchannels == 3, ch1 = {0., 0., 0.}; ch2 = {_, _, _}];\n\npv = Reverse[\nPosition[\nTable[\nThreshold[Differences[t[[i]]], {\"Hard\", 0.1}] /. ch1 -> \" \" /.\nch2 -> \"|\", {i, 1, Length[t], 1}], \"|\"], 2];\n\n(* Get positions where color change greater than 0.1 in the other \\\ndirection *)\n\ntrt = Transpose[t];\nph = Position[\nTable[\nThreshold[Differences[trt[[i]]], {\"Hard\", 0.1}] /. ch1 -> \" \" /.\nch2 -> \"_\", {i, 1, Length[trt], 1}], \"_\"];\n\n(* Find intersection *)\ninter = Intersection[pv, ph];\n\n(* Convert the list to image coordinates and add missing points \\\n(where y=0) *)\n\ninterS = {#[], ImageDimensions[img][] - #[]} & /@ (subsize*\ninter);\nfinal = {};\nAppendTo[final, {#[], 0}] & /@ interS;\np = SortBy[DeleteDuplicates@Join[interS, final], {First, Greater}];\n\n(* Function to process list p of coordinates and draw rectangles. h \\\nis the height *)\n\nrectangleSegments[h_, p_List] :=\nModule[{rects, anchor, prevCol, prevRow},\nrects = {};\nanchor = {0, h};\nprevCol = 0;\nprevRow = 0;\n\ni = 1;\nWhile[i <= Length[p],\nIf[\np[[i]][] != prevCol,\nanchor = {prevCol, h},\nanchor = {anchor[], prevRow};\n];\nAppendTo[rects, {anchor, p[[i]]}];\nprevCol = p[[i]][];\nprevRow = p[[i]][];\ni++;\n];\nrects\n]\n\n(* Calling function *)\n\nrseg = rectangleSegments[ImageDimensions[img][], p];\nShow[Graphics[{EdgeForm[Black], FaceForm[RandomColor[]],\nRectangle[#[], #[]]}] & /@ rseg]",
null,
"• I like this approach, if you can make it do the right thing for the additional few test images I posted above, I will accept!\n– M.R.\nMay 20 at 21:23\n• In the six examples I added, whenever there is a big clear space (constant or perhaps a smooth gradient) the algo should produce a larger rectangle that \"fills that space\", not just lots of tiny ones...\n– M.R.\nMay 20 at 21:27\n• Also, the big yellow rect on the left in your example should not have three short ones below it (because the image is all white down to the bottom edge)\n– M.R.\nMay 20 at 21:32\n• I made some modifications (see main window), but I am afraid that the problem filling big clear spaces is not solved. May 20 at 22:59\n• Have you considered using something like ColorQuantize or DominantColors in preprocessing May 20 at 23:07"
] | [
null,
"https://i.stack.imgur.com/mF4i8.png",
null,
"https://i.stack.imgur.com/7bxKx.png",
null,
"https://i.stack.imgur.com/pQb08.jpg",
null,
"https://i.stack.imgur.com/l451n.jpg",
null,
"https://i.stack.imgur.com/ODsic.jpg",
null,
"https://i.stack.imgur.com/MRwOP.jpg",
null,
"https://i.stack.imgur.com/6y8DE.jpg",
null,
"https://i.stack.imgur.com/2ddYi.jpg",
null,
"https://i.stack.imgur.com/4kTak.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.57389724,"math_prob":0.9776522,"size":11891,"snap":"2022-27-2022-33","text_gpt3_token_len":3326,"char_repetition_ratio":0.18103811,"word_repetition_ratio":0.22093023,"special_character_ratio":0.32301742,"punctuation_ratio":0.20586608,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9751273,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T08:08:38Z\",\"WARC-Record-ID\":\"<urn:uuid:b2e09082-8824-46df-ac87-62e30e28df00>\",\"Content-Length\":\"249389\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6317342e-a7fb-4cff-9eab-b2a9ed6c9a21>\",\"WARC-Concurrent-To\":\"<urn:uuid:08385d08-4f5b-4509-b199-be5b41240cd1>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/265260/rectangular-image-segmentation/268469\",\"WARC-Payload-Digest\":\"sha1:X7AXFN7XBBUWZLEN2DZPBJSYZZGJKINM\",\"WARC-Block-Digest\":\"sha1:AFWCDYM2UFWCYXZM7YB5SHC6V6EGZAHU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104215805.66_warc_CC-MAIN-20220703073750-20220703103750-00232.warc.gz\"}"} |
https://5dok.net/document/yr38rpko-evolution-how-long-does-speciation-take.html | [
"# evolution: how long does speciation take?\n\nN/A\nN/A\nProtected\n\nShare \"evolution: how long does speciation take?\"\n\nCopied!\n79\n0\n0\nLaat meer zien ( pagina)\n\nHele tekst\n\n(1)\n\nfaculteit Wiskunde en Natuurwetenschappen\n\n## evolution: how long does speciation take?\n\n### Bachelor thesis Mathematics\n\nJuly 2014\n\nStudent: Gerard Hekkelman\n\nFirst Supervisor Mathematics: Prof. Dr. E.C. Wit Second Supervisor Mathematics : Dr. W.P. Krijnen\n\nSupervisor Centre for Evological and Evolutionary Studies: Prof. Dr. R.S.\n\nEtienne\n\n(2)\n\nAbstract\n\nIn this paper, we study the constant birth-death process and the protracted birth-death process which describes macro-evolution in biology. We derive some properties of these models, and an- alytically derive the likelihood of the constant birth-death model. For the protracted birth-death model, we derive an appromation of the likelihood using the concept of Gaussian processes. Using the exact likelihood of the constant birth-death process and the approximated likelihood of the protracted birth-death model, we infer a given phylogenetic tree and discuss the results.\n\n(3)\n\n### Contents\n\nIntroduction 4\n\n1 Birth-death models 6\n\n1.1 Pure Birth Model . . . 6\n\n1.2 Pure Death model . . . 7\n\n1.3 Birth-death model . . . 8\n\n1.4 Protracted Pure Birth Model . . . 9\n\n1.5 Protracted Birth-Death Model . . . 10\n\n2 Inference 11 2.1 Inferences of Constant Birth-Death Models . . . 11\n\n2.2 Inference of the Yule Process . . . 12\n\n2.2.1 Confidence intervals for λ . . . 14\n\n2.3 Likelihood of the birth-death process . . . 17\n\n2.3.1 Probability functions of simulated trees. . . 18\n\n2.3.2 Probability Density Function of a Simulated Tree . . . 21\n\n2.3.3 Maximum Likelihood Estimation . . . 22\n\n2.4 Approximating the Likelihood of the Protracted Birth-Death Model . . . 23\n\n2.4.1 Approximation of the Process . . . 23\n\n2.4.2 Approximation of the likelihood . . . 26\n\n3 Simulation studies 27 3.1 Simulation of a Birth-Death Model . . . 27\n\n3.1.1 Basic Properties . . . 27\n\n3.1.2 Results of simulation . . . 32\n\n3.1.3 Inferences . . . 33\n\n3.2 Simulation of a Protracted Birth-Death Model . . . 40\n\n3.2.1 Basic Properties . . . 40\n\n3.2.2 Approximation of the Process . . . 44\n\n3.2.3 Inferences . . . 44\n\n4 Conclusion and Discussion 50 4.1 Conclusion . . . 50\n\n4.2 Discussion and Impossible Improvements . . . 51\n\nA R codes 54 A.1 Review: Other Models and Biological Properties . . . 54\n\nA.1.1 Moran Process . . . 54\n\nA.1.2 Random Walk . . . 54\n\nA.1.3 Shapes of clades . . . 55\n\nA.1.4 Sampling and paraphyly . . . 55\n\nA.1.5 Diversification models . . . 56\n\n(4)\n\nA.1.6 Diversification slowdowns . . . 58 A.2\n\nBirth-Death Simulation Functions . . . 60 A.3\n\nCode for Constant Birth-Death Simulations . . . 62 A.4\n\nCode for Protracted Birth-Death Simulations . . . 72\n\n(5)\n\n### Introduction\n\nA phylogenetic tree (that is, the ancestry, predigree, genealogy) of a group of species represents the evolutionary relatedness between the species, and their common ancestry, for example between human and chimpansee. Currently, DNA data are routinely used to infer the phylogenetic tree.\n\nAn example of an actual phylogenetic tree is given in figure 1.\n\nFigure 1: An example of a phylogenetic tree.\n\nIn this example, we can see that the group of virusses is represented in a tree form. Every branch represents a species, and each node represents a common ancestor for two species in the tree. For example, ”virus2” and ”virus5” have one common ancestor, which is ”virus1”. For these kind of phylogenies, we are interested in estimating the parameters which have their influence on the evolutionary process. However, the phylogenetic trees used as data only contain the genetic information of extant species. By this, we don’t have information of the extant species which existed in the past, but are extinct today. Consider figure 1 again, the data we observe is given in figure 2.\n\nThese trees are called reconstructed phylogenies. We infere these phylogenies to obtain the parameters which cause the growth of a phylogeny. Sophisticated software, often using Bayesian approaches, has been developed for this. One component of this software is the probability of the phylogenetic tree given a model of species diversification (speciation and extinction). There is a lot of debate about the proper model of species diversification.\n\nOne example is a model which assumes that speciation and extinction are instantaneous events,\n\n(6)\n\nFigure 2: An example of a phylogenetic tree.\n\nwith occur with a constant rate through time. This model is called the constant birth-death model.\n\nAnother example is a model that assumes that speciation takes time rather than being instanta- neous (which all other models assume). This model, called the protracted birth-death model, allows us to estimate how long a speciation process takes to complete. However, the current math- ematical representation of the protracted speciation model seems incoherent.\n\nIn this paper we first represent the constant birth-death model. We give the properties of the constant birth-death model and make simulation studies. Thereafter, we make inferences of this reconstructed phylogeny and obtain a maximum likelihood estimate of the diversification rate. Sec- ondly, we introduce the protracted model and search for an approximation of the representation of the protracted model and a way to infer its parameters from a given phylogenetic tree.\n\n(7)\n\n### Birth-death models\n\nThe birth-death models are mathematical models of the dynamical process of speciation and ex- tinction, which are used for informative thinking about macroevolutionary patterns. The pure birth-model and the pure death model are submodels of the birth-death models. We can simply model the growth of a clade through speciation to deduce the rate of speciation, discovering any irregularities. Also less obvious questions can be answered using the birth-death models.\n\nThe use of the birth-death models started back in 1924 to the work of the statistician Yule (Yule 1924), who modelled a clade growing according the pure birth process. Here, extinction does not occur.\n\nThe modern usage of the birth-death models started with the work of Raup and colleagues in 1973 (Raup et al 1973), when computers were starting to become a readily used tool (Nee 2006).\n\nThey modelled a scenario where the probabilities of a species is either speciating or coming to ex- tinct are equal, and the clade size was roughly constant. They discovered that this purely random process could produce trends and patterns resembling those in the fossil records.\n\nWe first investigate several models of macro-evolution, where the constant birth-death model and the protracted birth-death model are the most important ones in this paper.\n\n### 1.1 Pure Birth Model\n\nIn this model, we assume that each species in a clade has a constant rate λ of producing a new species at any point in time and that extinction never occurs. We assume that the duration of speciation Ti is exponentially distributed for each species in the clade:\n\nTb(i) ∼ exp(λ), ∀i = 1, 2, . . . (1.1) Let Ng denote the number of species in the clade, we are interested in the probability density function of the number of species at a time t. The probability that at time t there are Ng species is given by the following stochastic differential equation (Yule 1924):\n\nd\n\ndtPr(Ng; t) = λ(Ng− 1) Pr(Ng− 1; t) − λNgPr(Ng; t) (1.2) This equation is known as the master equation of the pure birth model, or Yule process named after its discoverer George Udny Yule. This equation will be solved analytically later in this paper.\n\nInstead of looking at the probability density function, we can also focus on the expected number of species in the clade at a time t:\n\n(8)\n\nFigure 1.1: George Udny Yule (1871-1951) .\n\nLet E [N (t)] denote the expected number of species in the clade at time t and set E [N0] = E [N (0)] as the expected initial number of species. The expected number of species E [N (t)] grows exponentially over time with a rate λ, hence we have the ODE:\n\nd\n\ndtE [N (t)] = λE [N (t)] , E [N (0)] = E [N0] (1.3) The solution is straightforward:\n\nE [N (t)] = E [N0] eλt (1.4)\n\nAs a consequence, a plot of log N (t) against the time should be linear and the slope of this plot provides an estimate of b, the per-capita birth rate.\n\n### 1.2 Pure Death model\n\nIn this model, we assume that each species in a clade has a constant rate µ of going extinct in time t and that speciation never occurs. We assume that the duration of extinction Ti is exponentially distributed for each species in the clade:\n\nTd(i) ∼ exp(µ), ∀i = 1, 2, . . . (1.5) Let Ng denote the number of species, we are interested in the probability density function of the number of species at a time t. The probability that at time t there are Ngspecies is similiar to the probability density function given in equation 1.2, and it is given implicitely by the following stochastic differential equation:\n\nd\n\ndtPr(Ng; t) = µ(Ng+ 1) Pr(Ng+ 1; t) − µNgPr(Ng; t) (1.6)\n\n(9)\n\nWith initial condition:\n\nPr(Ng= Ng(0); t = 0) = 1 (1.7)\n\nThe equation 1.6 is the master equation of the pure death model. Again, this equation can be solved analytically. First, we look at the expected number of species in the clade at a time t:\n\nLet E [N (t)] denote the expected number of species in the clade at time t and set E [N0] = E [N (0)] as the expected initial number of species. The expected number of species that survive N (t) decays exponentially over time with a rate µ, hence we have the following ODE:\n\nd\n\ndtE [N (t)] = −µE [N (t)] , E [N (0)] = E [N0] (1.8) The solution is straightforward:\n\nE [N (t)] = E [N0] e−µt (1.9)\n\n### 1.3 Birth-death model\n\nIn this model, we assume that each species in a clade speciate with a constant rate λ and that extinction occurs with a constant rate µ in time. We assume that the duration of speciation Tb\n\nand the duration of extinction Td are exponetially distributed for each species:\n\nTb(i) ∼ exp(λ), ∀i = 1, 2, . . . (1.10) Td(i) ∼ exp(µ), ∀i = 1, 2, . . . (1.11) Let Ng denote the number of species, we are interested in the probability density function of the number of species at a time t. The probability that at time t there are Ng species is obtained by Kendall in 1948 (Kendall 1948):\n\nd\n\ndtPr(Ng; t) = λ(Ng− 1) Pr(Ng− 1; t) + µ(Ng+ 1) Pr(Ng+ 1; t)\n\n− (λ + µ)NgPr(Ng; t) (1.12)\n\nWith initial condition:\n\nPr(Ng= Ng(0); t = 0) = 1 (1.13)\n\nThis equation will be solved analytically later in this paper. Again, we look first to the expected number of species in the clade at a time t. We define the net rate of diversification as r := λ − µ.\n\nThe clades grow exponentially at a rate r, we therefore have the following ODE:\n\nd\n\ndtE [N (t)] = (λ − µ)E [N (t)] , E [N (0)] = E [N0] (1.14) The solution is straight forward:\n\nE [N (t)] = E [N0] e(λ−µ)t (1.15)\n\nThe semilog representation of the growth of a molecular phylogeny was expected to be linear with a constant slope r, however when the extinction rate is nonzero it is not. But, over much over the history the plot is expected to be linear with slope r (Nee 2006). Molecular phylogenies are completely based on data of extant species, and species that originated more recently in the past had less time to go extinct. Therefore, the extinction rate µ gets smaller as we approach the present. So as we approach the present, the slope of the semilog presentation is expected to\n\n(10)\n\nFigure 1.2: David George Kendall (1918-2007) .\n\nincrease and assymptotically approach λ. By this, we can estimate λ and µ seperately on intuition (Nee 2006). Notice that in molecular phylogenies, we can only see the history of extant species.\n\nSpecies which have become extinct are not represented in these phylogenies.\n\nWe have two surprises from the birth-death models (Nee 2006):\n\n• We can estimate speciation and extinction rates from molecular phylogenies, even though they do not contain information from extant species.\n\n• We can estimate per-species speciation and extinction rates even from fossil data that are not resolved to a level below that of the genus.\n\n### 1.4 Protracted Pure Birth Model\n\nIn this model we make speciation a protracted process. That is, that each species in a clade pro- duce new species with a rate λ1, but these species are not fully completed. These incipient species become good species with a rate λ2, and give rise to new incipient species by rate λ3. This means that speciation in the protracted birth model takes one extra step with respect to the regular birth model.\n\nTo be more precise: in this model, we assume that each species in a clade speciate with a constant rate λ1 and that speciation is completed by a constant rate λ2. We assume that the duration of speciation Tb and the duration of completion Tc are exponetially distributed for each species:\n\nTb(i) ∼ exp(λ1), ∀i = 1, 2, . . . (1.16) Tc(i) ∼ exp(λ2), ∀i = 1, 2, . . . (1.17)\n\n(11)\n\nLet Ng denote the number of good species (the completed species), and let Ni denote the incipient species (the incomplete species). Now, we are interested in the probability density function of the number of good species Ngand incipient species Niat a time t. The probability that at time t there are Ng good species and Ni incipient species is implicitely obtained (Etienne, Rosindell 2012 ):\n\nd\n\ndtPr(Ng, Ni; t) = λ1NgPr(Ng, Ni− 1; t) + λ3(Ni− 1) Pr(Ng, Ni− 1; t)\n\n+ λ2(Ni+ 1) Pr(Ng− 1, Ni+ 1; t) − (λ1Ng+ (λ2+ λ3)Ni) Pr(Ng, Ni; t) (1.18) With initial condition:\n\nPr(Ng= Ng(0), Ni= 0; t = 0) = 1 (1.19) This model cannot be solved analytically, so we investigate the expected number of good and incipient species first. We obtain the following system of ODE’s:\n\nd dt\n\n\u0014E[Ng; t]\n\nE[Ni; t]\n\n\u0015\n\n=\u0014 0 λ2\n\nλ1 λ3− λ2\n\n\u0015 \u0014E[Ng; t]\n\nE[Ni; t]\n\n\u0015\n\n(1.20) With initial condition:\n\n\u0014E[Ng; t = 0]\n\nE[Ni; t = 0]\n\n\u0015\n\n=\u0014Ng(0) 0\n\n\u0015\n\n(1.21) The general solution of this system is given in (Etienne and Rosindell 2012). The case that λ1 = λ3 is much easier to solve, the solution of this system in the case is straight forward the solution of a linear system ODE:\n\nE[Ng; t] = Ng(0) 1 +λλ1\n\n2\n\nexp(λ1t) + Ng(0) 1 +λλ2\n\n1\n\nexp(−λ2t) (1.22)\n\nE[Ni; t] = Ng(0) 1 +λλ2\n\n1\n\n(exp(λ1t) − exp(λ2t)) (1.23)\n\n### 1.5 Protracted Birth-Death Model\n\nIf we make speciation in the constant birth-death model protracted, analogously to the protracted birth model, we assume that species give birth to incipient species by a rate λ1, incipient species reach completion by a rate λ2, give birth to new incipient species by a rate λ3 and extinction occurs for both species with a rate µ.1\n\nd\n\ndtPr(Ng, Ni; t) = λ1NgPr(Ng, Ni− 1; t) + λ1(Ni− 1) Pr(Ng, Ni− 1; t)\n\n+ λ2(Ni+ 1) Pr(Ng− 1, Ni+ 1; t) + µ(Ng+ 1) Pr(Ng+ 1, Ni; t) + µ(Ni+ 1) Pr(Ng, Ni+ 1; t)\n\n((λ1+ µ)Ng+ (λ1+ λ2+ µ)Ni) Pr(Ng, Ni; t) (1.24) In this paper, we focus on obtaining an approximation of the the probability density function implicitely given in equation 1.24.\n\nThere are many other models describing evolutionary processes, However, they lie beyond our scope in this paper. A review of these models, and some biological properties are given in appendix A.1.\n\n1Note that in the paper of Etienne and Rosindell, they assume that extinction rates may differ for good species with rate µ1and incipient species with rate µ2. Here we assume µ = µ1= µ2.\n\n(12)\n\n### 2.1 Inferences of Constant Birth-Death Models\n\nMolecular systematics produces phylogenies that may have a temporal dimension, thus containing information about the tempo of the clade’s evolution as well as the relationships among taxa (Nee 2001). We are particulary interested in extracting this information. We are able to study the rates of diversification in the clade. Bladwin and Sanderson (1998) used the simple Yule process to study the rate of diversification . The Yule Process is a fairly simple process, but we can use nice statistical approaches for obtaining results under this model.\n\nFor inference, we first must distinguish between actual and reconstructed phylogenies. We note four points (Nee et al. 1994):\n\n• Both phylogenies have the same number of taxa at the present day\n\n• At any point in the past, the number of lineages in the reconstructed phylogeny is less or equal than the number of lineages of the actual phylogeny.\n\n• The number of lineages in the reconstructed phylogeny cannot decrease towards the present, this can happen in the actual phylogeny.\n\n• The reconstructed phylogeny provides timings for when each pair of species has last shared a common ancestor and commences at that point in the past when all present-day species shared their most recent common ancestor.\n\nFor making the decision whether we have to investigate the causes of an apparently high di- versification should be invesigated, it is desirable to whether or not the diversification really is remarkable in reference to some null model.\n\nIn a broad class of models, the number of progreny lineages of any particular ancestral lineages in a reconstructed phylogeny has a geometric distribution (Nee et al. 1994). The derivation of this will be given later in section 2.3.1. We are interested in how many lineages each ancestral lineage gives rise to and if the distribution of progeny lineages fits our geometric expectation.\n\nUnder the constant rate birth-death model, there are interesting properties for both the actual and the reconstructed phylogeny:\n\n• If there was no extinction, the curves representing the number of lineages through time for the actual phylogeny and the reconstructed phylogeny are the same.\n\n• The push of the past is observed as an apparently higher diversification rate at the beginning of the growth of the actual phylogeny. It is a result from the fact that we consider clades which have survived to the present day, and these are the ones which got a ”flying start”\n\nmost of the times.\n\n(13)\n\n• The pull of the present is the observed increase in the diversification rate in the recent past of the reconstructed phylogeny. It is a result of the fact that lineages who arose more recent in the past have had less time to go extinct.\n\n• The slope of both phylogenies is λ − µ most of the time.\n\n• The slope of the reconstructed phylogeny assymptotically approaches the birth rate.\n\n• The pull of the present and the pull of the past increases as the fraction µ/λ approaches one.\n\nUsing reconstructed phylogenies, it is tempting to take a pure birth process intuitively, so d = 0, as a model for the data since there are no extinct species in the phylogeny. However, using likelihood plots in (Nee et al. 1994) show that we cannot exclude the possibility that it is actu- ally nonzero. Using a likelihood surface apporach, we check in section 3.1.3 if this is indeed the case.\n\nWhen only a sample of a clade has been used, so not the whole clade, creates an effect of slowdown in diversification. This effect becomes more pronounced the smaller the sample is (Nee et al. 1994). Lineages that have arisen in the recent past are likely to have fewer progeny than lineages which arose in a more distant past. So, they are less likely to have any progeny represented in the sample which causes the oberserved diversification slowdown.\n\n### 2.2 Inference of the Yule Process\n\nWe first make the following assumptions, before we make inferences with the Yule Process:\n\n• From the time of its origin with two lineages time t ago, the tree has grown according to a Yule Process with parameter λ\n\n• the age of the clade, t, is a fixed variable.\n\nNote that in this way, at each point in time each lineage has the same probability of giving birth to a new lineage. This probability is proportional to the parameter λ, controlling the rate of growth of the tree. Note also that the clade size N is not predetermined, it is a random variable.\n\nWe assume that our data consists if the length of time from each node to the present day , denoted by xi. For example, a clade with four lineages at the end has 3 nodes. We let x2 denote the time between the present and the last ancestor, which is also the age of this monophyletic clade. So we have x2= t. Moreover, we consider in general the following quantities:\n\nsr=\n\nn\n\nX\n\ni=3\n\nxi (2.1)\n\ns = 2x2+\n\nn\n\nX\n\ni=3\n\nxi= 2x2+ sr (2.2)\n\nWe immediately that s represents the sum of all branch lengths in the tree, and sr only repre- sents the sum of the branch lengths, except the two basal branches.\n\nTo obtain the probability density function of the data, we have the following:\n\n• We have n branches, from which the 2 base branches have the same length and are therefore the same. We therefore have (n − 1)! different permuations for the clade.\n\n• We assume the branch lengths are independent exponential random variables\n\n(14)\n\nThe probability for a branch i giving birth after a time xi is:\n\nPr(Xi≥ xi) = Z xi\n\n0\n\nλ exp[−λxi]dxi = exp[−λxi] (2.3) The probability of j lineages in the tree at the time of a birth event is proportional to λj. Hence, for a tree which has 2 birth events after its first node and thus has 3 species at this moment. These events contribute therefore the term 2λ ∗ 3λ tot the likelihood expression. In general: for n species we have n − 1 birth events. So, we have a contribution of the term (n − 1)!λn−2 to the likelihood expression.\n\nThe n branches x, of which the two basal branches are the same, contribute a term exp[−λs]\n\nto the likelihood by the combined probabilities of equation 2.3. Combining this and the latter, we obtain the likelihood for the data:\n\nPr(x, n; λ, t) = (n − 1)!λn−2e−λs (2.4) Actually, we are only interested in the probability density function of x only. The probability of n lineages, given λ and t is obtained by the following:\n\n• A clade starting with two species, has given birth to n − 2 new species.\n\n• Two species did not speciate before time t.\n\n• There occured n − 1 birth events.\n\nBecause we have n lineages and n−1 birth events, there are n−1 different possibilities of getting n lineages. This contributes a (n − 1) term to the likelihood. In the same reasoning as before, we have two branches which did not give birth, contributing the exp(−λt)2= exp(−2λt) term to the likelihood. Finally, the n − 2 branches who gave birth, contributed the (1 − exp[−λt])n−2term to the likelihood. Combining this we get the probability of n lineages in the clade:\n\nPr(n; λ, t) = (n − 1)e−2λt(1 − e−λt)n−2 (2.5) Where the probability density functions obtained are the same as in (Nee 2001). Thus, the probability of x given n, λ and t is:\n\nPr(x; n, λ, t) = Pr(x, n; λ, t)\n\nPr(n; λ, t) =(n − 2)!λn−2e−λsr\n\n(1 − e−λt)n−2) (2.6)\n\nRemark Observe that:\n\nPr(x; n, λ, t) =(n − 2)!λn−2e−λsr\n\n(1 − e−λt)n−2) (2.7)\n\n= (n − 2)!\n\nn\n\nY\n\ni=3\n\nλe−λxi\n\n1 − e−λt (2.8)\n\nTherefore, this is also the probability density function of the order statistics of n − 2 inde- pendent and identically distributed random variables, where the random variables are truncated exponentially distributed: i.e. they have the density:\n\nPr(Xi = xi; λ, t) = λe−λxi\n\n1 − e−λt (2.9)\n\n(15)\n\n### 2.2.1 Confidence intervals for λ\n\nMoran’s Maximum Likelihood Estimation\n\nConsider the likelihood function given in equation 2.4 again. For maximum likelihood estimation, we need both the outcome of our random variables n and x. Taking the natural logarithm of equation 2.4, we obtain the loglikelihood:\n\n`(x, n; λ, t) = log(n − 1)! + (n − 2) log(λ) − λs (2.10) Taking the partial derivative of 2.10 to λ yields us\n\n∂λ`(x, n; λ, t) = n − 2\n\ns − λ (2.11)\n\nWhere setting 2.11 equal to zero yields our maximum likelihood estimate of the parameter λ:\n\nλˆKM=n − 2\n\ns (2.12)\n\nThis estimator has been called the Kendall-Moran estimator, after those who have derived it first. To obtain the variance of this estimator, we look to the inverse of the Fisher information matrix J , which is a scalar in this case:\n\nJ (λ, n) = −E\u0014 ∂2\n\n∂λ2log (Pr(x, n; λ, t)))\n\n\u0015\n\n= n − 2 λ2\n\n⇒ Var(ˆλKM) = λ2\n\nn − 2 (2.13)\n\nTherefore, we can i see by the results in (Dobson and Barnett 2008):\n\nλˆKM∼ N\n\n\u0012 λ, λ2\n\nn − 2\n\n\u0013\n\n(2.14) By this result, we can obtain a two sided 95% confidence interval simply. By properties of the normal distribution, we know that z0.975 = 1.96 = −z0.025. Hence we obtain:\n\n−1.96 <\n\nˆλKM− λ J12 =\n\nλˆKM− λ\n\nλ n−2\n\n< 1.96 (2.15)\n\nFrom this last equation 2.15, we obtain the 95% confidence interval for λ by basic calculus:\n\nˆλKM\n\n1 − 1.96n−2 < λ <\n\nˆλKM\n\n1 + 1.96n−2 (2.16)\n\nKendall’s Maximum Likelihood Estimation\n\nKendall obtained in 1949 a different variance from equation 2.13, λ2\n\n2(eλt− 1) (2.17)\n\nHowever, it is easy to see the relationship between the variances given in equations 2.13 and 2.17, since we can rewrite equation 2.17 as:\n\n(16)\n\nλ2\n\n2(eλt− 1) = λ2\n\n2eλt− 2) = λ2\n\n2E[n] − 2 (2.18)\n\nWhere we treat n now a random variable representing the population, because of the pure- birth assumption. Note that it bega with two ancestral lineages time t ago. To obtain a 95%\n\nconfidence intervalm we proceed similiarly to the Kendall-Moran case. However, we obtain after some calculations:\n\nλˆK= λ\n\n\u0012\n\n1 ± 1.96\n\n2eλt− 2\n\n\u0013\n\n(2.19) This equation can’t be solved to λ analytically. So we can’t derive any explicit confidence interval in this case. However, in any particular case we can obtain these intervals numerically (Nee 2001).\n\nThe variances of Kendall-Moran (equation 2.13) and Kendall (equation 2.17) differ by the as- sumptions they made for both situations. Moran was considering a population of processes that grew untill they reach exactly n lineages. In that case, n is a predetermined variable in the like- lihood (2.4.) and the age of the clade t is a random variable. In this model, the branch lengths, the elements of x, are exponentially distributed. In the Kendall model, the time t was fixed and the number of lineages n was a random variable. In this model the branch lengths are truncated exponentially distributed. Allthough the maximum likelihood estimates are the same for both models, the variances differ. The Kendall model seems to be the appropiated one for inference in this context, corresponding to our original specifications (Nee 2001).\n\nHowever, if we want to take the Moran model then it is not necessary to make use of any approximations because we obtain an exact confidence interval. Notice that an exponential random variable with λ = 0.5 has a chi-squared distribution with two degrees of freedom. Let χ2n,α be the upper α point of the chi-squared distribution with 2n degress of freedom. Then the exact 95%\n\nconfidence interval for λ under the Moran model is (Nee 2001):\n\nχ2(n−2),0.025\n\n2s < λ < χ2(n−2),0.975\n\n2s (2.20)\n\nParadis (1997) suggested a third choice for the variance, where he uses the observed Fisher infor- mation instead of the expected information:\n\nˆλ2P\n\nn − 2 (2.21)\n\nThis variance yields another 95% confidence interval, which is straight forward:\n\nλˆP\n\n\u0012\n\n1 − 1.96\n\n√n − 2\n\n\u0013\n\n< λ < ˆλP\n\n\u0012\n\n1 + 1.96\n\n√n − 2\n\n\u0013\n\n(2.22) The analysis of Paradis differs from all the others. He assumes the branch lengths are expo- nentially distributed, so he is studying the same hypothetical population of processes as Moran.\n\nHowever, Paradis’ maximum likelihood estimate of λ differs:\n\nˆλP = n − 1 P i = 2nxi\n\n(2.23) The numerator is larger by one, and the denumerator smaller by x2 in comparison with the maximum likelihood estimate of Kendall and Moran. This difference is the result of the use of a different likelihood than equation (2.4).\n\n(17)\n\nHey’s Maximum Likelihood Estimation\n\nHey (1992) ignores the length of time between the last node in the tree and the present. That is equivalent to substracting nxn from s. For equation (2.4), this is only a appropiate likelihood corresponding to the Moran model if we assume that a speciation event occured at the present day, such that xn = 0. If this is not suitable but one wished to use Moran’s model, one has to use Hey’s form. From now, we assume that xn= 0 when discussing the Moran model.\n\nLikelihood ratio analysis\n\nObserve the likelihood ratio statistic, which is chi-squared distributed with one degree of freedom (Dobson and Barnett 2008):\n\nW (λ0) = 2[`(ˆλ) − `(λ0)] ∼ χ2(1) (2.24) Where `(λ) is the log likelihood function given in equation (2.10). A 95% confidence interval is given by the set of points λ ∈ Λ such that:\n\nW (λ) < χ20.95(1) = 3.841 (2.25)\n\nWhich can also be solved numerically (Nee 2001).\n\nSummarizing the last sections:\n\n• Kendall’s model correspond to our original specifications of the correct probability model for inference, but the confidence interval it provids is a numerical approximation whose accuracy is unknown. We can get an exact interval, when we discard the information about λ in the clade size.\n\n• Moran’s model provides an exact confidence interval, but the model assumes a fixed clade size and a randomly varying clade age. This does not seem appropiate in the present context.\n\n• The likelihood ratio test analysis and the Paradis variant falls outside the natural development of this topic, because we base our analysis on models in which clade size, age or both are fixed.\n\nBecause non of the confidence intervals presents an overwhelming case for itself, we compare their performances in simulations (Nee 2001). The following was observed:\n\n• The Paradis variant was the least precise variant, since it delivered the largest confidence interval for an equivalent precision as the Moran and Kendall model.\n\n• The truncated exponential model was also dropped for the same reason as the Paradis variant.\n\n• By making the ratio of the variances of Moran and Kendall’s models as a new parameter, we can naturally compare the results of both models for different values of this ratio. Due to the better performances, Moran’s model was the best model.\n\n(18)\n\n### 2.3 Likelihood of the birth-death process\n\nWe set t0= 0 as the time where our phylogeny begins. Also, we set the time T as the time of the present day, and the time t as some arbitrary time between 0 and T . For the birth death process, we can distinguish four related processes (Nee et al. 1994). These processes are plotted in figure 2.1, where the blue line represents a time t > 0 and the red line represents a time T > t:\n\nFigure 2.1: Four related processes within the constant birth-death process.\n\n1. A simple birth death process which may or may not survive to time t.\n\n2. A subset of the realizations of the first process and consists of those realizations which survive to time t between times 0 and T , but may or may not go extinct before the present day.\n\n3. A subset of the second process which do survive to the present.\n\n4. The reconstructed process, derived from the third by pruning the historical record of those lineages which do not have contemporary descendants. This process corresponds to a perfect phylogeny.\n\n(19)\n\n### 2.3.1 Probability functions of simulated trees.\n\nWe define Pr (i; t) as the probability that a process has i lineages at time t. For each of the four processes, we subsript the probabilities by 1 to 4 to clearify on which process we describe.\n\nA crucial probability relevant to both paleontological and molecular phylogenetic data is the probability that a lineage that arose at some time t in the past, still has some descendants at the time T later (Kendall 1948):\n\nP (t, T ) = 1 −µλ\n\n1 −µλe−(λ−µ)(T −t) (2.26)\n\nThe probability equation (2.26) is a crucial probabilty, because a lineage will not appear in a molecular phylogeny if it has no extant descendants.\n\nWe can estimate both composite parameters a = µλ and r = b − d, but in general the estimations of r are much more precise than the estimations of a (Foote 1988, Nee et al 1995a, ). Together with the probability:\n\nut= 1 − e−(λ−µ)t\n\n1 − µλe−(λ−µ)t (2.27)\n\nWe can express Prk(i; t) in terms of ut and P (t, T ) for all four processes. We will show that these process all have a geometric distribution, except process 3, which has the distribution of two independent geometric random variables.\n\nFor the first process, we obtain that the probability of no descendants is equal to 1 minus the probability that a single lineage alive at time 0 has still some descendants at time t. Hence:\n\nPr\n\n1(i, t) =\n\n\u001a 1 − P (0, t) if i = 0\n\nP (0, t)(1 − ut)ui−1t if i > 0 (2.28) From the probability of the first process in equation 2.28, we immediately obtain the probability for process 2 by conditioning the probability on the event that the process survives untill time t:\n\nPr\n\n2(i, t) = Pr(i lineages|no extinction untill t)\n\n=Pr(i lineages ∧ no extinction untill t) Pr(no extinction untill t)\n\n=Pr1(i, t|i > 0) P (0, t)\n\n= (1 − ut)ui−1t (2.29)\n\nFrom the last probability equation 2.29, we obtain the conditional probability Pr3(i, t; T ) for a birth-death process that survives to T. To do this, we compound distribution 2.29 with the probability that at least one of the i lineages existing at time t has some descendants at time T (Nee et al. 1994):\n\nPr\n\n3(i, t; T ) = Pr2(i, t)(1 − (1 − P (t, T ))i) P\n\nj=1Pr2(j, t)(1 − (1 − P (t, T ))i)\n\n= (1 − ut)ui−1t (1 − (1 − P (t, T ))i) P\n\nj=1(1 − ut)ui−1t (1 − (1 − P (t, T ))i) (2.30) This function is an ugly expression, but there is a simple underlying structure. To show this, we use the moment generating function (mgf) of random variables. To obtain our desired result, we first need the moment generating function of geometric random variables.\n\n(20)\n\nProperty 1 The moment generating function of a random variable X which is geometric dis- tributed with parameter p, is given by:\n\nmX(t) = pet\n\n1 − et(1 − p) (2.31)\n\nProof\n\nmX(t) = E\u0002eXt\u0003\n\n=\n\nX\n\nx=1\n\nextPr(X = x)\n\n=\n\nX\n\nx=1\n\n(et)xp(1 − p)x−1\n\n= etp\n\nX\n\nx=1\n\n(et)x−1(1 − p)x−1\n\n= etp\n\nX\n\nx=1\n\n((1 − p)et)x−1\n\n= etp\n\nX\n\ny=0\n\n((1 − p)et)y\n\n= et p\n\n1 − (1 − p)et As desired.\n\nUsing the properties of the moment generating function, we observe the following for the prob- ability density given for the third process:\n\nProperty 2 The moment generating function of the number of lineages i in the third process, equals the moment generating function of the sum of two independent geometric variables G1 and G2, where\n\nG1∼ geo(ut)\n\n(G2+ 1) ∼ geo(ut(1 − P (t, T )) That is, that the pdf of G2 is given by:\n\nPr(G2= i) = (1 − ut(1 − P (t, T ))(ut(1 − P (t, T )))i, i ≥ 0 Proof We begin the proof with the moment generating function of G2:\n\nmG2(t) =\n\nX\n\ni=0\n\neitPr(G2= i) (2.32)\n\n=\n\nX\n\ni=0\n\nsi(1 − ut(1 − P (t, T ))(ut(1 − P (t, T ))i (2.33)\n\n= (1 − ut(1 − P (t, T ))\n\nX\n\ni=0\n\n(ut(1 − P (t, T )s))i (2.34)\n\n= 1 − ut(1 − P (t, T ))\n\n1 − ut(1 − P (t, T )s (2.35)\n\n(21)\n\nThe moment generating of the the third process is given by:\n\nmi(t) =\n\nX\n\ni=1\n\neitPr\n\n3(i, t; T )\n\n=\n\nX\n\ni=1\n\nsi (1 − ut)ui−1t (1 − (1 − P (t, T ))i) P\n\nj=1(1 − ut)ui−1t (1 − (1 − P (t, T ))i)\n\n= s(1 − ut)\n\nP\n\nj=1(1 − ut)ui−1t (1 − (1 − P (t, T ))i)\n\nX\n\ni=1\n\n(sut)i−1(1 − (1 − P (t, T ))i)\n\n= s(1 − ut) κ\n\n\" X\n\ni=1\n\n(sut)i−1\n\nX\n\ni=1\n\n(sut)i−1(1 − P (t, T ))i\n\n#\n\n= s(1 − ut) κ\n\n\"\n\n1\n\n1 − sut− (1 − P (t, T )\n\nX\n\ni=1\n\n(sut(1 − P (t, T )))i−1\n\n#\n\n= s(1 − ut) κ\n\n\u0014 1\n\n1 − sut\n\n− (1 − P (t, T ) sut(1 − P (t, T ))\n\n\u0015\n\n= s(1 − ut) κ\n\n\u0014 P (t, T )\n\n(1 − sut)(1 − (1 − P (t, T ))sut)\n\n\u0015\n\n=\u0014 s(1 − ut) 1 − uts\n\n\u0015 \u0014 1 κ\n\nP (t, T ) 1 − ut(1 − P (t, T ))s\n\n\u0015\n\nWhere:\n\nκ =\n\nX\n\nj=1\n\n(1 − ut)uj−1t [1 − (1 − P (t, T ))i]\n\n=\n\nX\n\nj=1\n\n(1 − ut)uj−1t\n\nX\n\nj=1\n\n(1 − ut)uj−1t [1 − (1 − P (t, T ))i]\n\n= 1 − (1 − ut)(1 − P (t, T ))\n\nX\n\nj=1\n\nuj−1t (1 − P (t, T ))i−1\n\n= 1 − (1 − ut)(1 − P (t, T )) 1\n\n1 − ut(1 − P (t, T ))\n\n= P (t, T ) 1 − ut(1 − P (t, T )) In that case:\n\nmi(t) =\u0014 s(1 − ut) 1 − uts\n\n\u0015 \u0014 1 − ut(1 − P (t, T )) 1 − ut(1 − P (t, T ))s\n\n\u0015\n\n(2.36) We recognize that the moment generating function is the product of two moment generating functions of variables G1and G2. The first term is the moment generating function of G1∼ geo(ut) and the second term is the moment generating function of (G2+ 1) ∼ geo(ut(1 − P (t, T )).\n\nThus, the number of lineages existing at time t for a birth-death process which will survive to a time T later can be treated as the sum of two independent variables, with a geometric distribution.\n\nNow we will see that process 4, which is reconstructed from this one, is again geometric distributed for Pr(i, t).\n\nLet zi be the probabilities of the third process, given by equation 2.30. Of the j lineages existing at time t, i will have some descendants at the time T . Here, i is binomial distributed with parameter P (t, T ) and n > 0, since at least one survives to T . So, we obtain for the reconstructed process:\n\n(22)\n\nPr4(i, t; T ) =\n\nX\n\nj=1\n\nzj\n\n1 − (1 − P (t, T ))j\n\n\u0012j i\n\n\u0013\n\nP (t, T )i(1 − P (t, T ))j−i (2.37)\n\nWhich simplifies to (Nee et al. 1994):\n\nPr4(i, t; T ) =\n\n\u0012\n\n1 − utP (0, T ) P (0, t)\n\n\u0013 \u0012\n\nutP (0, T ) P (0, t)\n\n\u0013i−1\n\n, i > 0 (2.38)\n\nWhich is a geometric distribution with parameter utP (0,T )\n\nP (0,t). Therefore, the first, second and fourth process have a geometric distributed number of lineages, the third process’ number of lineages is distributed as the sum of two geometric variables.\n\n### 2.3.2 Probability Density Function of a Simulated Tree\n\nWe now look more closely to a birth-death process which generated the distribution given in equation 2.38. We let n(t) be the number of lineages at time t. We suppose that we grow a reconstructed phylogenetic tree, starting at time t = 0, thus n(0) = 1. Each lineage a time t give rise to a daughter lineage at a rate λ Pr(t, T ). So, after a small amount of time dt, we have:\n\nn(t) =\n\n\u001a n(t) + 1 with probability n(t)λP (t, T )dt\n\nn(t) with probability 1 − n(t)λP (t, T )dt (2.39) We extend the probability model given in equation 2.39 by the following. Given that we have n lineages at a time tn, we denote the time untill the next lineage as τ . We then have:\n\nPr(τ > t + dt; 0 < t < T − tn) = Pr(τ > t; 0 < t < T − tn) Pr(no lineage in dt)\n\n= Pr(τ > t; 0 < t < T − tn)(1 − λn(t)P (t, T )dt) (2.40) Which is equivalent to:\n\nd\n\ndtPr(τ > t; 0 < t < T − tn) = −λn(t)P (t, T ) Pr(τ > t; 0 < t < T − tn) (2.41) Since we know the number of lineages n(t) at time t, we can treat it as a fixed variable. The solution of this ODE is then straight forward:\n\nPr(τ > t; 0 < t < T − tn) = exp\n\n\u0012\n\n−λn Z tn+t\n\ntn\n\nP (s, T )ds\n\n\u0013\n\n(2.42) Where the last integral needs some computations to get the solution:\n\n(23)\n\nZ tn+t tn\n\nP (s, T )ds = Z tn+t\n\ntn\n\n1 −µλ\n\n1 − µλe−(λ−µ)(T −s)ds\n\n= Z tn+t\n\ntn\n\nλ − µ\n\nλ − µe−(λ−µ)(T −s)ds substitute u(t) = (λ − µ)t, to obtain:\n\n=\n\nZ u(tn+t) u(tn)\n\n1\n\nλ − µeu−(λ−µ)Tdu\n\n=\n\nZ u(tn+t) u(tn)\n\n1 λ − κeudu\n\nsubstitute v(u) = λ − κeu to obtain\n\n=\n\nZ v(u(tn+t)) v(u(tn))\n\n1 w\n\n1 w − λdw\n\n=\n\nZ v(u(tn+t)) v(u(tn))\n\n\u0012\n\n− 1\n\nλw + 1\n\nλ(w − λ)\n\n\u0013 dv\n\n= −1\n\nλlog(w) + 1\n\nλlog(λ(w − λ))|v(u(tv(u(tn+t))\n\nn))\n\n= t −µ λt − 1\n\nλlog\u0012 1 −µλexp(−(λ − µ)(T − tn− t)) 1 −µλexp(−(λ − µ)(T − tn))\n\n\u0013\n\n(2.43) Which yields us the solution of the ODE:\n\nPr(τ > t; 0 < t < T − tn) = exp(−n(λ − µ)t)\u0012 1 −µλexp(−(λ − µ)(T − tn− t)) 1 − µλexp(−(λ − µ)(T − tn))\n\n\u0013n\n\n(2.44) From the last equation 2.45, we can obtain the probability density function of the waiting time for a birth, t (Nee et al. 1994):\n\nPr(t; tn, T, λ, µ) = n(λ − µ)e−n(λ−µ)t 1 − µλe−(λ−µ)(T −tn−t)\u0001n−1\n\n1 −µλe−(λ−µ)(T −tn)\u0001n (2.45) From this probability density function in equation 2.45, we obtain the likelihood of a tree which has grown according to a birth-death process. Let xi denote the time between the i-th birth event and the present day T . Then, the time between two lineages is ti = xi− xi+1 and xi = T − tn. Suppose we have a tree which has N lineages at the present day. Let the vector x contain all the times between lineages, which starts from the time that the first species gave birth to two new species. Therefore, x = (x2, x3, . . . , xN). We obtain the likelihood of the tree by multiplying all independent probabilities of the times between lineages:\n\nL(x|λ, µ) =\n\nN −1\n\nY\n\ni=1\n\nPr(ti; tn, T, λ, µ)\n\n=\n\nN −1\n\nY\n\ni=1\n\nn(λ − µ)e−n(λ−µ)(xi−xi+1) 1 −µλe−(λ−µ)xi+1\u0001n−1\n\n1 −µλe−(λ−µ)xi\u0001n (2.46)\n\n### 2.3.3 Maximum Likelihood Estimation\n\nTo find the maximum likelihood estimator of the parameters of the birth-death process, we need the likelihood obtain in equation 2.46. Instead of maximizing the likelihood, we maximize the log-likelihood:\n\n(24)\n\n`(x|λ, µ) = log\n\nN −1\n\nY\n\ni=1\n\nn(λ − µ)e−n(λ−µ)(xi−xi+1) 1 −µλe−(λ−µ)xi+1\u0001n−1 1 −µλe−(λ−µ)xi\u0001n\n\n!\n\n=\n\nN −1\n\nX\n\ni=2\n\n{log(n) + log(λ − µ) − n(λ − µ)(xi− xi+1)}\n\n+\n\nN −1\n\nX\n\ni=2\n\n\u001a\n\n(n − 1) log(1 − λ\n\nµe−(λ−µ)xi+1) − n log(1 −λ\n\nµe−(λ−µ)xi)\n\n\u001b\n\n(2.47)\n\nThe normal approach, i.e. taking partial derrivatives and setting equal to zero is not applicable in this case, since this equation can’t be solved analytically. Therefore, we obtain maximum likelihood estimates of the parameters λ and µ numerically. This will be done in chapter 3.\n\n### 2.4 Approximating the Likelihood of the Protracted Birth- Death Model\n\nInstead of finding the likelihood of a tree which has grown according to a protracted birth-death model, we make an approximation using Gaussian processes. A protracted birth-death tree which has grown according to a Gaussian process is not a useful tree for the estimation, since the tree its lineages might not be a natural number in this process. However, treating a tree which has grown according to a protracted birth-death process, can be inferred by an approximation with a birth-death process. We first give the definition of a Gaussian process:\n\nDefinition A real-valued stochastic process, {Xt; t ∈ T }, where T is an index set, is a Gaussian process if all the finite-dimensional distributions have a multivariate normal distribution. That is, for any choice of distinct values t1, . . . , tk ∈ T , the random vector X = (Xt1, . . . , Xtk)0 has a multivariate normal distribution with mean µ = E[X] and covariance matrix Σ = cov(X, X), which has the probability density function:\n\nfX(X; µ, Σ) = 1 (2π)k2|Σ|12\n\nexp\n\n\u0014\n\n−1\n\n2(X − µ)0Σ−1(X − µ)\n\n\u0015\n\n(2.48) Which is denoted by:\n\nX ∼ N (µ, Σ) (2.49)\n\nThe mean and covariance of a Gaussian process are defined by:\n\nµ(t) = E[Xt], γ(s, t) = cov(Xs, Xt), t, s ∈ T (2.50)\n\n### 2.4.1 Approximation of the Process\n\nWe let Ntdenote the state at time t, which consists of the number of good species Ng(t) and the number of incipient species Ni(t) at time t. Suppose that we take a step in time dt, such that the next event occurs. For the protracted birth-death process, five different event may occur:\n\n1. A good species giving birth to an incipient species, with a rate λ1Ng(t)dt.\n\n2. A good species going extinct, with a rate µNg(t)dt.\n\n3. An incipient species reaching completion, with a rate λ2Ni(t)dt\n\n4. An incipient species giving birth to a new incipient species, with a rate λ3Ni(t)dt 5. An incipient species going extinct, with a rate µNi(t)dt.\n\n(25)\n\nThe Mean and Covariance of the Process\n\nWe look to the difference between the state of today Nt= [Ng(t), Ni(t)] and the state of the next event Nt+dt, which is denoted by ∆Nt= Nt+dt− Nt. To avoid confusion with the mean µ and the death rate µ, we let the vector Λ = (λ1, λ2, λ3, µ) denote the parameters. Doing so, we have the following expectation:\n\nE[∆Nt|Nt] =\n\n\u0014 −µNg(t)dt + λ2Ni(t)dt\n\n−µNi(t)dt − λ2Ni(t)dt + λ1Ng(t)dt + λ3Ni(t)dt\n\n\u0015\n\n=\n\n\u0014 −µNg(t) + λ2Ni(t) +λ1Ng(t) − (λ2+ λ3+ µ)Ni(t)\n\n\u0015 dt\n\n= µ {Nt, Λ} dt (2.51)\n\nWe again look to the difference difference between the state of today ∆Nt. As we have earlier defined the expectation of the development in the protracted birth-death process, given the last state of the process, we do exactly the same for the conditional covariance. In a similiar way as for the expectation, we obtain the following conditional covariance:\n\ncov(∆Nt|Nt) =\n\n\u0014 Var (∆Ng(t + dt)|Nt) cov(∆Ng(t + dt), ∆Ni(, t + dt)|Nt) cov(∆Ng(t + dt), ∆Ni(, t + dt)|Nt) Var (∆Ni(t + dt)|Nt)\n\n\u0015 (2.52) We obtain the entries of the matrix part by part. We start with Σ1,1 :\n\nVar (∆Ng(t + dt)|Nt) = E[(∆Ng(t + dt))2|Nt] − (E[∆Ng(t + dt)|Nt])2\n\n= µNg(t)dt + λ2Ni(t)dt − (µNg(t)dt + λ2Ni(t)dt)2\n\n= µNg(t)dt + λ2Ni(t)dt − (µNg(t) + λ2Ni(t))2dt2\n\n∝ µNg(t)dt + λ2Ni(t)dt (2.53)\n\nΣ1,2 = Σ2,1 is given by:\n\ncov(∆Ng(t + dt), ∆Ni(t + dt)|Nt) = E[∆Ng(t + dt) · ∆Ni(t + dt)|Nt]\n\n− E[∆Ng(t + dt)|Nt] · E[∆Ni(t + dt)|Nt]\n\n= −λ2Ni(t)dt − (−µNg(t)dt + λ2Ni(t)dt)\n\n× (−µNi(t)dt − λ2Ni(t)dt + λ1Ng(t)dt + λ3Ni(t)dt)\n\n= −λ2Ni(t)dt − (−µNg(t) + λ2Ni(t))\n\n× (−µNi(t) − λ2Ni(t) + λ1Ng(t) + λ3Ni(t))dt2\n\n∝ −λ2Ni(t)dt (2.54)\n\nΣ2,2 is given by:\n\nVar (∆Ni(t + dt)|Nt) = E[(∆Ni(t + dt))2|Nt] − (E[∆Ni(t + dt)|Nt])2\n\n= λ1Ng(t)dt + (λ2+ λ3+ µ)Ni(t)dt\n\n− (−µNi(t)dt − λ2Ni(t)dt + λ1Ng(t)dt + λ3Ni(t)dt)2\n\n= λ1Ng(t)dt + (λ2+ λ3+ µ)Ni(t)dt\n\n− (−µNi(t) − λ2Ni(t) + λ1Ng(t) + λ3Ni(t))2dt2\n\n∝ λ1Ng(t)dt + (λ2+ λ3+ µ)Ni(t)dt (2.55) We therefore obtain the covariance matrix Σ as:\n\n(26)\n\ncov(∆Nt|Nt) =\u0014µNg(t) + λ2Ni(t) −λ2Ni(t)\n\n−λ2Ni(t) λ1Ng(t) + (λ2+ λ3+ µ)Ni(t)\n\n\u0015\n\ndt = Σ{Nt, Λ} · dt (2.56) If we assume that the number good species and incipient species is nonnegative, the matrix Σ{Nt, Λ} is nonsingular in most of the situations because the determinant is nonzero. Observe that the determinant of Σ is given by the following expression:\n\ndet (Σ) =\n\nµNg(t) + λ2Ni(t) −λ2Ni(t)\n\n−λ2Ni(t) λ1Ng(t) + (λ2+ λ3+ µ)Ni(t)\n\n= µNg(t) (λ1Ng(t) + [λ2+ λ3+ µ]Ni(t)) + λ2Ni(t) (λ1Ng(t) + (λ3+ µ)Ni(t)) (2.57) The determinant is nonzero, except if:\n\n1. All species are extinct.\n\n2. Only one of the parameters is nonzero.\n\n3. All good species have become extinct, and λ2= 0 or λ3= µ = 0.\n\n4. There are no incipient species left, µ = 0 or λ1= 0.\n\nBecause the matrix Σ is in general nonsingular, we can find the square root of the matrix by diagonalization. That is:\n\nB(Xt|Λ) = V D12V−1 (2.58)\n\nWhere D is the matrix consisting of the eigenvalues of Σ, and V is the matrix consisting of the eigenvalues of Σ.\n\nDefining the New Process\n\nNow we have obtained the expressions of µ(Nt, Λ) and Σ(Nt, Λ), we can obtain a Gaussian process for our protracted birth-death model. We first need to define the concept of a Wiener Process:\n\nDefinition A statistical process {Wt; t ∈ T } in continuous time is called a Wienerprocess or Brow- nian Motion if it has the following properties:\n\n1. W0= 0\n\n2. The function t → Wt is everywhere continuous almost surely.\n\n3. Wt has independent increments with Wt− Ws∼ N (0, I(t − s)) for 0 ≤ s < t.\n\nWe define the new process Ytby means of a Wiener Process {Wt; t ∈ T }:\n\ndYt= µ(Yt; Λ)dt + B(Yt; Λ)dWt (2.59) When we look more closely to equation 2.59, we see that the conditional expectation and the conditional variance of dYtgiven Ytare the same for the protracted birth-death process. Therefore, we conclude that this newly defined process is an approximation of the original protracted birth- death process. However, notice that is it not exactly the same process. We can even see that this construction of the approximation yields us a Gaussian process:\n\nE[dYt|Yt] = µ(Yt; Λ)dt\n\ncov[dYt|Yt] = B(Yt; Λ)dtB(Yt; Λ)0= Σ · dt dYt∼ N (µdt, Σdt)\n\nTherefore, the process given by Yk+1= Yk+ dYk is a Gaussian process. This means that we can approximate the protracted birth-death model, which is originally a Poisson process, by means of a Gaussian process, given by {Yt, t ∈ T }.\n\n(27)\n\n### 2.4.2 Approximation of the likelihood\n\nSuppose that we have a tree which has grown according to a protracted birth-death process. For this model, we assume that we use actual phylogenies for inferences, instead of the reconstructed phylogenies. Doing so, we can list the number of good species and incipient species on a time t in a vector, called Nt. At a time t + ∆t later, the next event occurs in our protracted birth-death process. We are interested in the difference between Ntand Nt+∆t:\n\n∆Nt= Nt+∆t− Nt (2.60)\n\nThis difference can be seen as a development in the process. Normally, the difference is obvi- ously a vector containing whole numbers. We can view these differences however, as an outcome in a Gaussian process. Intuitively, the set {∆Nt|t ∈ T } can be treated a Gaussian process. Fur- thermore, by the construction of our approximation dYt, we can even claim that all these variables are indepent multivariate normal random variables. This is the case, because the approximation dYt is constructed by means of a Wienerprocess which has independent increments.\n\nSince the likelihood of a development ∆Ntin the protracted process is now approximated by the probability density of the multivariate normal distribution, we have that for each event time t ∈ T :\n\nf∆Nt(∆Nt; µt, Σt) ≈ 1 (2π)k2t|12\n\nexp\n\n\u0014\n\n−1\n\n2(∆Nt− µt)0Σ−1t (∆Nt− µt)\n\n\u0015\n\n= dmnorm(∆Nt; Σt, µt) (2.61)\n\nBy the independent increments of the Wienerprocess, we therefore obtain the following loglike- lihood of the actual phylogeny:\n\n`({Nt, t ∈ T }; Λ) ≈X\n\nt∈T\n\nlog (dmnorm(∆Nt; Σt, µt)) (2.62)\n\nThismaximum likelihood estimate can’t be solved analytically for Λ = (λ1, λ2, λ3, µ). The inferences will therefore be computed numerically in chapter 3.\n\n(28)\n\n### Simulation studies\n\nIn this chapter, we investigate the simulations of the constant birth-death model and the protracted birth-death model. We first review and proof basic properties of the models.\n\n### 3.1 Simulation of a Birth-Death Model\n\nThe simulation of the birth-death model has been done numerically in R. In order to do so, we use a recursive algorithm. First, we give an outline of important analytical properties of the algorithm.\n\n### 3.1.1 Basic Properties\n\nThe first event\n\nIn a birth-death model, we suppose that speciation and extinction are processes that proceed simulataneously. The process that finished first, is the process that we observe. Let Tb(1) denote the time that speciation of the first species in a phylogeny would occur, and Td(1) the time that extinction of this species would occure. Furthermore, we assume that Tb(1) ∼ exp(λ) and Td(1) ∼ exp(µ) are independent random variables. We now consider the time that the first of these two events occurs: T (1) = min(Tb(1), Td(1)). First, we state an useful theorem.\n\nProperty 3 Suppose X1, . . . , Xn are independent exponential random variables with parameters λ1, . . . , λn. Then the minimum of the random variables, X = min{X1, . . . , Xn}, is exponentially distributed with parameter λ =Pn\n\ni=1λi.\n\nProof We make use of unicity of the surivival function for random variables. Recall that the survival function of an exponentially distributed random variable Y with parameter θ, is defined by:\n\nS(y; θ) = Pr(Y ≥ y) = exp(−θy) (3.1)\n\nNow we are going to make use of the fact, that for a certain a > 0: X = min{X1, . . . , Xn} > a if and only if Xi > a for all i = 1 . . . n. By independency of the random variables, we obtain the survival function of X:\n\nPr(X = min{X1, . . . , Xn} > a) =\n\nn\n\nY\n\ni=1\n\nPr(Xi > a)\n\n=\n\nn\n\nY\n\ni=1\n\nexp(−aλi)\n\n= exp(−a\n\nn\n\nX\n\ni=1\n\nλi)\n\nSo by the unicity of the survival function, X ∼ exp(Pn i=1λi).\n\nReferenties\n\nGERELATEERDE DOCUMENTEN\n\nEconomische Zaken presenteerden in november 2014, had als ondertitel: ‘Keuzes voor de toekomst’. Het visiedocument kiest voor excellente wetenschap en onderzoek dat de\n\nOur main results are representations for the decay parameter under four different scenarios, derived from a unified perspective involving the orthogonal polynomials appearing in\n\nFor estimating the parameters and calculating the values of the log-likelihoods of the unrestricted and restricted Lee-Carter models, we considered the pop- ulations of the\n\nA wider VFoV combined with a longer scanning range, high output rate of points and better range accuracy will always be of significant importance when selecting the most suitable\n\nToday, people demand up‐to‐date, or (near‐) real-time information, about virtually anything, anywhere, and\n\nIs het een engel met een bazuin (beschermer van de kerk of bedoeld voor het kerkhof ), een mannelijke of vrouwelijke schutheilige met een attribuut, of een begijn (wat op die\n\nThe study has aimed to fill a gap in the current literature on the relationship between South Africa and the PRC by looking at it as a continuum and using asymmetry\n\n(*): To what extent were the firm’s marketing research resources, people, knowledge and skills adequate for the gathering of market information needed for this application.. O\n\n derive a minimum distance estimator based on the likelihood of the first-di fference transform of a dynamic panel data model utilizing assumptions regarding the\n\nA Monte Carlo comparison with the HLIM, HFUL and SJEF estimators shows that the BLIM estimator gives the smallest median bias only in case of small number of instruments\n\nIn this paper we propose a method for forecasting emerging technologies in their take-off phase by identifying mature technologies with a similar behavior in that phase,\n\nChanges in the extent of recorded crime can therefore also be the result of changes in the population's willingness to report crime, in the policy of the police towards\n\n(nr. 1, 2 en 3) maar niet bij de bedrijven met recirculatie bij paprika (nr. 23 t/m 27), waar door het overlopen van de goten aan- zienlijke hoeveelheden water en kunstmest\n\nSelecta Klemm rood West-Stek geel West-Stek roze Van Staaveren roze Selecta Klemm roze Van Staaveren lila West-Stek roze Van Staaveren geel Vergelijkingsras roze... Kooij\n\nOmdat de concentratie fosfor in dit water (veel) groter kan zijn dan in deze studie opgelegd, zijn de berekende concentraties fosfor in het uittredend grondwater aan de lage kant\n\nWhile it is assumed that the code is outdated, it does not mean that its objectives are different from the objectives of the HR department to „construct a\n\nTo assess the effect of culture on extreme response style and midpoint response style, four of Hofstede’s cultural dimensions are examined: individualism,\n\nThis is partly connected to the im- pression management theory, whose final aim is to control and influence the perceptions other people have about the giver of the\n\nTegelijkertijd daalt de vaccinatiegraad zozeer, dat sommige landen voor specifieke infecties, zoals bij- voorbeeld mazelen, het vaccineren van\n\nThese peculiar properties can be entirely attributed to uncorrected contamination by the galaxy’s spheroidal component, and when this element is properly modelled, the galaxy\n\n• Instead of counting how “close” the verifying category is to the one with the highest probability, we should count how often we observe the category with the highest probability,"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8650027,"math_prob":0.99345255,"size":49555,"snap":"2023-40-2023-50","text_gpt3_token_len":14940,"char_repetition_ratio":0.16399266,"word_repetition_ratio":0.11671233,"special_character_ratio":0.294158,"punctuation_ratio":0.12761799,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99818575,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T00:40:40Z\",\"WARC-Record-ID\":\"<urn:uuid:59982faa-6396-4234-9443-e5b2931d7a4a>\",\"Content-Length\":\"253197\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f420c5f4-85c5-4a70-b479-567308d5fcae>\",\"WARC-Concurrent-To\":\"<urn:uuid:7259d21c-6722-47c5-98c8-15d1e5abd1a6>\",\"WARC-IP-Address\":\"142.93.139.232\",\"WARC-Target-URI\":\"https://5dok.net/document/yr38rpko-evolution-how-long-does-speciation-take.html\",\"WARC-Payload-Digest\":\"sha1:PNP6DIB4H4N55K446VL6ATB6DDABHY22\",\"WARC-Block-Digest\":\"sha1:2MFVBNN6V3GGL3WIXP4OZ6JAFS7V3H25\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510942.97_warc_CC-MAIN-20231002001302-20231002031302-00236.warc.gz\"}"} |
https://open.library.ubc.ca/soa/cIRcle/collections/ubctheses/24/items/1.0074375 | [
"# Open Collections\n\n## UBC Theses and Dissertations",
null,
"## UBC Theses and Dissertations\n\n### CQ algorithms : theory, computations and nonconvex extensions Guo, Yipin\n\n#### Abstract\n\nThe split feasibility problem (SFP) is important due to its occurrence in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy. Mathematically, it can be formulated as finding a point x∗ such that x∗ ∈ C and Ax∗ ∈ Q, where A is a bounded linear operator, C and Q are subsets of two Hilbert spaces H₁ and H₂ respectively. One particular algorithm for solving this problem is the CQ algorithm. In this thesis, previous work on CQ algorithm is presented and a new proof of convergence of the relaxed CQ algorithm is given. The CQ algorithm is shown to be a special case of the subgradient projection algorithm. The SFP is extended into two nonconvex cases. The first one is on S-subdifferentiable functions, and the other one is on prox-regular functions. The subgradient projection algorithm and CQ algorithm are proved to converge to a solution of the first and second case respectively."
] | [
null,
"https://open.library.ubc.ca/staticfile/img/featured/icon-ubctheses.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88449293,"math_prob":0.9702414,"size":1167,"snap":"2022-40-2023-06","text_gpt3_token_len":277,"char_repetition_ratio":0.14015478,"word_repetition_ratio":0.0,"special_character_ratio":0.18251929,"punctuation_ratio":0.08737864,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97192556,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-01T22:09:59Z\",\"WARC-Record-ID\":\"<urn:uuid:2f3951ba-6c56-40b1-a622-dab80731b916>\",\"Content-Length\":\"106337\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:efe04069-a02b-4c4a-ab6e-bd36955fe29c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0db8609c-32dc-4e54-a1ee-4a60b23a90d3>\",\"WARC-IP-Address\":\"142.103.96.89\",\"WARC-Target-URI\":\"https://open.library.ubc.ca/soa/cIRcle/collections/ubctheses/24/items/1.0074375\",\"WARC-Payload-Digest\":\"sha1:IOWFJE4MLTGPMQL5IQNMKL5RRUHPAETN\",\"WARC-Block-Digest\":\"sha1:ERA4ANJUPTDRWW3T25C7YQXULNOW26YJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030336921.76_warc_CC-MAIN-20221001195125-20221001225125-00609.warc.gz\"}"} |
http://15462.courses.cs.cmu.edu/spring2022/newsfeed | [
"What's Going On",
null,
"How exactly does the triangle mesh become a grid again?",
null,
"bt1 commented on slide_022 of Variance Reduction ()\n\n@pbhuang In this case, I think \"pathological\" just means a worst-case scenario for rendering - naive path tracing works really badly with a small directional light source + near-perfect mirror, since the probability of tracing back to the light source is small",
null,
"pbhuang commented on slide_022 of Variance Reduction ()\n\nWhat does \"pathological\" mean in this context?",
null,
"nsp commented on slide_010 of Dynamics and Time Integration ()\n\nForce should really be mg for this slide, meaning that acceleration would just be g, and we would not need mass in any of these equations. Motion of an object in flight should not depend on its mass.",
null,
"nsp commented on slide_032 of Introduction to Animation ()\n\nLet me add a comment here -- in practice, it is easiest to define our polynomials p(u) for 0 <= u <= 1. What that means for this slide is that we can keep this notion of knots occurring at different times. However, it might be easier to reparameterize our f(t) where t_i <= t <= t_{i+1} in terms of parameter u, such that 0 <= u <= 1. This is pretty simple -- u is just (t-t_i) / (t_{i+1} - t_i). This reparameterization makes our polynomials easier to work with, derive, and think about. We will make the assumption of this parameterization in the class.",
null,
"I'm a bit confused with this slide here. For the premultiplied (from the previous slide), isn't $$B' = \\alpha_B B$$? So the non-premultiplied equation = premultiplied equation?",
null,
"coconut commented on slide_023 of Introduction to Geometry ()\n\nCould you elaborate more on why it would be harder to sample an implicit surface like the one above?",
null,
"Question! (i,j) looks like it's in the middle of the pixel ($$f_{00}$$) but i and j are the result of a floor so should be an int (and so shouldn't be in the middle of a pixel), - is the diagram not quite right? Or am I missing something here?",
null,
"AugustSZ commented on slide_038 of 3D Rotations ()\n\nDon't understand what's q bar",
null,
"pbhuang commented on slide_050 of Math Review Part I (Linear Algebra) ()\n\nA and C",
null,
"pbhuang commented on slide_029 of Math Review Part I (Linear Algebra) ()\n\nYes it does",
null,
"AugustSZ commented on slide_010 of Math Review Part II (Vector Calculus) ()\n\n<u,v>=<v,u>",
null,
"AugustSZ commented on slide_034 of Math Review Part I (Linear Algebra) ()\n\nit's symmetric",
null,
"AugustSZ commented on slide_030 of Math Review Part I (Linear Algebra) ()\n\nyes again",
null,
"AugustSZ commented on slide_029 of Math Review Part I (Linear Algebra) ()\n\nYes. It satisfies the 4 properties of a norm on Page 28",
null,
"nsp commented on slide_038 of Introduction ()\n\nBefore I replaced these slides with a few fixes, AugustSZ commented — There're 2 kinds of projections mentioned in Fundamentals of Computer Graphics: Orthographic Projection and Perspective projection. Camera Definition: position, orientation(up/look-at)\n\nI replied — Yes, that's right. We'll have a closer look when we get to the Perspective Projection lecture on Feb 9th. We've made an assumption here that we have Perspective projection, with camera position at c (really, c=(0,0,0) given the math on the slide), up vector is y, and look-at vector is z. If we want a general camera, we'll need to loosen those restrictions later to allow arbitrary camera positions and orientations. However, you can always fall back on the rule of similar triangles. They may be oriented oddly in 3D space to begin with, but we'll see how to transform them to look like the picture above.\n\nOrthographic projection is very simple. For orthographic projection in the picture above, v=y and u=x. Nothing more to do."
] | [
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/ohno.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/bt1.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/pbhuang.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/nsp.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/nsp.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/hannahhe.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/coconut.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/hannahhe.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/AugustSZ.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/pbhuang.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/pbhuang.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/AugustSZ.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/AugustSZ.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/AugustSZ.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/AugustSZ.jpg",
null,
"http://15462.courses.cs.cmu.edu/spring2022content/profile_pictures/nsp.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93611634,"math_prob":0.80671537,"size":2785,"snap":"2022-27-2022-33","text_gpt3_token_len":662,"char_repetition_ratio":0.094210714,"word_repetition_ratio":0.0041067763,"special_character_ratio":0.24021544,"punctuation_ratio":0.10326087,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.992689,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,1,null,1,null,3,null,3,null,3,null,2,null,1,null,2,null,5,null,3,null,3,null,5,null,5,null,5,null,5,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T00:27:24Z\",\"WARC-Record-ID\":\"<urn:uuid:863eac66-7641-4081-8c5b-46494789fa5a>\",\"Content-Length\":\"18374\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1905d5f0-9588-4d9d-98bf-4961dd9fbfd0>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a4b9b0c-b184-4190-9957-253a632f27bf>\",\"WARC-IP-Address\":\"128.2.220.105\",\"WARC-Target-URI\":\"http://15462.courses.cs.cmu.edu/spring2022/newsfeed\",\"WARC-Payload-Digest\":\"sha1:SBL4USOKCOMPSRWXKVBWCE22M5EI52TW\",\"WARC-Block-Digest\":\"sha1:WCK7JMNZC3MWRDMRQTCUVDT32BG4J52Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572215.27_warc_CC-MAIN-20220815235954-20220816025954-00036.warc.gz\"}"} |
https://codinginterviewpro.com/interview-questions-and-answers/all-new-60-oops-interview-questions/ | [
"# All New 60 OOPs Interview Questions\n\n## Introduction\n\nWelcome to the world of Object-Oriented Programming (OOP)! In this interview, we will explore some fundamental concepts and principles of OOP, which is a programming paradigm widely used in modern software development. OOP aims to structure code in a way that mirrors the real world, making it easier to design, understand, and maintain software systems.\n\nRemember that understanding OOP goes beyond memorizing syntax; it’s about grasping the underlying principles and applying them to solve real-world problems. So, let’s embark on this learning journey together, and I’m excited to see your knowledge and passion for Object-Oriented Programming in action! Good luck!\n\n## Basic Questions\n\n### 1. What is Object-Oriented Programming (OOP)?\n\nObject-Oriented Programming (OOP) is a programming paradigm that organizes data and behaviors into objects. It focuses on modeling real-world entities as objects, encapsulating data and methods within them. OOP promotes code reusability, maintainability, and modularity.\n\n### 2. What are the main principles of OOP?\n\nThe main principles of OOP are:\n\n1. Encapsulation: Bundling data and methods that operate on the data within a single unit (object).\n2. Abstraction: Hiding unnecessary implementation details and exposing only relevant features of an object.\n3. Inheritance: Creating new classes (subclasses) based on existing classes (superclasses) to inherit their attributes and behaviors.\n4. Polymorphism: The ability of different objects to be treated as instances of their common superclass, allowing methods to be called on them without knowing their specific types.\n\n### 3. What is a class in OOP?\n\nA class in OOP is a blueprint that defines the structure and behavior of objects. It acts as a template for creating objects with specific attributes and methods.\n\nPython\n``````class Car:\ndef __init__(self, make, model):\nself.make = make\nself.model = model\n\ndef start_engine(self):\nreturn f\"{self.make} {self.model}'s engine started.\"``````\n\n### 4. What is an object in OOP?\n\nAn object in OOP is an instance of a class. It represents a real-world entity or concept and contains data and methods defined in the class.\n\nPython\n``````# Creating objects (instances) of the Car class\ncar1 = Car(\"Toyota\", \"Corolla\")\ncar2 = Car(\"Honda\", \"Civic\")\n\n# Accessing object's attributes and methods\nprint(car1.make) # Output: Toyota\nprint(car2.start_engine()) # Output: Honda Civic's engine started.``````\n\n### 5. What is inheritance in OOP?\n\nInheritance in OOP allows a new class (subclass) to inherit attributes and methods from an existing class (superclass). It promotes code reuse and supports the “is-a” relationship.\n\nPython\n``````# Base class (Superclass)\nclass Animal:\ndef speak(self):\npass\n\n# Derived class (Subclass)\nclass Dog(Animal):\ndef speak(self):\nreturn \"Woof!\"\n\nclass Cat(Animal):\ndef speak(self):\nreturn \"Meow!\"\n\ndog = Dog()\ncat = Cat()\nprint(dog.speak()) # Output: Woof!\nprint(cat.speak()) # Output: Meow!``````\n\n### 6. What is polymorphism in OOP?\n\nPolymorphism allows objects of different classes to be treated as instances of their common superclass. It enables the same method to be called on different objects, producing different results based on the actual object’s type.\n\nPython\n``````# Using the Animal superclass from the previous example\ndef animal_sound(animal):\nreturn animal.speak()\n\ndog = Dog()\ncat = Cat()\n\nprint(animal_sound(dog)) # Output: Woof!\nprint(animal_sound(cat)) # Output: Meow!``````\n\n### 7. What is abstraction in OOP?\n\nAbstraction in OOP involves hiding implementation details and exposing only essential features to the outside world.\n\nPython\n``````from abc import ABC, abstractmethod\n\nclass Shape(ABC):\n@abstractmethod\ndef area(self):\npass\n\nclass Circle(Shape):\n\ndef area(self):\n\ncircle = Circle(5)\nprint(circle.area()) # Output: 78.5``````\n\n### 8. What is encapsulation in OOP?\n\nEncapsulation in OOP involves bundling data (attributes) and methods that operate on that data within a class, providing control over access to the data.\n\nPython\n``````class BankAccount:\ndef __init__(self):\nself.__balance = 0\n\ndef deposit(self, amount):\nself.__balance += amount\n\ndef withdraw(self, amount):\nif amount <= self.__balance:\nself.__balance -= amount\nelse:\nprint(\"Insufficient balance.\")\n\ndef get_balance(self):\nreturn self.__balance\n\naccount = BankAccount()\naccount.deposit(1000)\naccount.withdraw(500)\nprint(account.get_balance()) # Output: 500``````\n\n### 9. Explain the concept of a constructor.\n\nA constructor in OOP is a special method that gets automatically called when an object is created from a class. It is used to initialize the object’s attributes and perform any necessary setup.\n\nPython\n``````class Person:\ndef __init__(self, name, age):\nself.name = name\nself.age = age\n\nperson1 = Person(\"Alice\", 25)\nperson2 = Person(\"Bob\", 30)\n\nprint(person1.name, person1.age) # Output: Alice 25\nprint(person2.name, person2.age) # Output: Bob 30``````\n\n### 10. What is a destructor?\n\nA destructor in OOP is a special method that gets automatically called when an object is about to be destroyed (e.g., goes out of scope or is explicitly deleted). In Python, the `__del__()` method serves as a destructor.\n\nPython\n``````class MyClass:\ndef __init__(self, name):\nself.name = name\n\ndef __del__(self):\nprint(f\"{self.name} object is being destroyed.\")\n\nobj1 = MyClass(\"Object 1\")\nobj2 = MyClass(\"Object 2\")\n\ndel obj1 # Output: Object 1 object is being destroyed.``````\n\n### 11. What is an instance in OOP?\n\nAn instance in OOP refers to a specific object created from a class. It has its own unique data and can access the methods defined in the class.\n\nPython\n``````class Person:\ndef __init__(self, name):\nself.name = name\n\nperson1 = Person(\"Alice\")\nperson2 = Person(\"Bob\")\n\nprint(person1.name) # Output: Alice\nprint(person2.name) # Output: Bob``````\n\n### 12. What is a method in OOP?\n\nA method in OOP is a function defined within a class, which operates on the class’s data and provides behavior to objects of that class.\n\nPython\n``````class Calculator:\nreturn a + b\n\ndef subtract(self, a, b):\nreturn a - b\n\ncalc = Calculator()\nresult2 = calc.subtract(10, 4)\n\nprint(result1) # Output: 8\nprint(result2) # Output: 6``````\n\nMethod overloading in OOP allows a class to define multiple methods with the same name but different parameter lists. The correct method is chosen based on the number or types of arguments passed during the method call.\n\nPython\n``````class OverloadingExample:\nreturn a + b\n\nreturn a + b + c\n\nresult1 = overload.add(2, 3) # This will raise an error since the first 'add' method is overwritten.\n\n### 14. What does ‘method overriding’ mean?\n\nMethod overriding in OOP occurs when a subclass provides a specific implementation for a method that is already defined in its superclass. The overridden method in the subclass takes precedence over the method in the superclass.\n\n### 15. What is a superclass and a subclass?\n\n• A superclass is a class that is extended or inherited by other classes. It provides common attributes and behaviors that its subclasses can utilize.\n• A subclass is a class that inherits from a superclass. It extends the superclass by adding specific attributes and behaviors or overriding existing ones.\nPython\n``````# Superclass\nclass Animal:\ndef speak(self):\nreturn \"Generic animal sound.\"\n\n# Subclass\nclass Dog(Animal):\ndef speak(self):\nreturn \"Woof!\"\n\ndog = Dog()\nprint(dog.speak()) # Output: Woof!``````\n\n### 16. What is an interface?\n\nIn Python, there is no explicit interface keyword as in some other languages. Instead, interfaces are achieved through abstract base classes (ABCs) using the `ABC` module.\n\nPython\n``````from abc import ABC, abstractmethod\n\nclass Shape(ABC):\n@abstractmethod\ndef area(self):\npass\n\nclass Circle(Shape):\n\ndef area(self):\n\ncircle = Circle(5)\nprint(circle.area()) # Output: 78.5``````\n\n### 18. What is a static method in OOP?\n\nA static method in OOP is a method that belongs to the class rather than an instance of the class. It does not have access to instance-specific data.\n\nPython\n``````class MathOperations:\n@staticmethod\ndef multiply(a, b):\nreturn a * b\n\nresult = MathOperations.multiply(3, 4)\nprint(result) # Output: 12``````\n\n### 19. What are access modifiers in OOP?\n\nAccess modifiers in OOP define the level of visibility and access to attributes and methods within a class.\n\n• `public`: No restriction, can be accessed from anywhere.\n• `protected`: Accessible within the class and its subclasses.\n• `private`: Accessible only within the class.\nPython\n``````class MyClass:\ndef __init__(self):\nself.public_attr = \"I am public.\"\nself._protected_attr = \"I am protected.\"\nself.__private_attr = \"I am private.\"\n\nobj = MyClass()\n\nprint(obj.public_attr) # Output: I am public.\nprint(obj._protected_attr) # Output: I am protected.\nprint(obj.__private_attr) # This will raise an AttributeError.``````\n\n## Intermediate Questions\n\n### 1. What are the different types of inheritance in OOP?\n\nThere are five types of inheritance in Object-Oriented Programming:\n\n1. Single Inheritance\n2. Multiple Inheritance\n3. Multilevel Inheritance\n4. Hierarchical Inheritance\n5. Hybrid Inheritance\nPython\n``````# Example of Single Inheritance\nclass Animal:\ndef speak(self):\npass\n\nclass Dog(Animal):\ndef speak(self):\nreturn \"Woof!\"\n\n# Example of Multiple Inheritance\nclass A:\ndef method_a(self):\npass\n\nclass B:\ndef method_b(self):\npass\n\nclass C(A, B):\ndef method_c(self):\npass\n\n# Other types of inheritance (Multilevel, Hierarchical, Hybrid) can also be shown similarly.``````\n\n### 2. Explain the concept of multiple inheritances. How is it handled in languages that do not directly support it, like Java?\n\nMultiple Inheritance is the ability of a class to inherit properties and behaviors from more than one parent class. Java does not directly support multiple inheritance for classes, but it can be achieved using interfaces.\n\nJava\n``````interface A {\nvoid methodA();\n}\n\ninterface B {\nvoid methodB();\n}\n\nclass MyClass implements A, B {\n@Override\npublic void methodA() {\n// Implementation\n}\n\n@Override\npublic void methodB() {\n// Implementation\n}\n}``````\n\n### 3. What is the diamond problem in multiple inheritance and how can it be resolved?\n\nThe Diamond Problem occurs when a class inherits from two classes, both of which have a common ancestor. It leads to ambiguity in the inheritance hierarchy. In some languages, like C++, this is an issue.\n\nC++\n``````class A {\npublic:\nvoid foo() {}\n};\n\nclass B : public A {};\n\nclass C : public A {};\n\nclass D : public B, public C {};\n// Here, class D inherits from both B and C, which both inherit from A.\n\nint main() {\nD d;\nd.foo(); // Ambiguity: Which 'foo' to call, B's or C's?\n\nreturn 0;\n}``````\n\nTo resolve the Diamond Problem, virtual inheritance can be used in C++.\n\nC++\n``````class A {\npublic:\nvoid foo() {}\n};\n\nclass B : virtual public A {};\n\nclass C : virtual public A {};\n\nclass D : public B, public C {};\n// Virtual inheritance ensures only one instance of A in D.\n\nint main() {\nD d;\nd.foo(); // No ambiguity now.\n\nreturn 0;\n}``````\n\n### 4. Explain the Liskov Substitution Principle (LSP).\n\nLSP states that objects of a superclass should be replaceable with objects of its subclasses without affecting the correctness of the program.\n\n### 5. What is the Open Closed Principle (OCP)?\n\nOCP states that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification. It encourages the use of abstraction and polymorphism to allow easy extension without changing existing code.\n\n### 6. What is the Single Responsibility Principle (SRP)?\n\nSRP states that a class should have only one reason to change, i.e., it should have only one responsibility.\n\n### 7. What is the Interface Segregation Principle (ISP)?\n\nISP states that a client should not be forced to implement interfaces it does not use. It promotes smaller, specific interfaces rather than large, general interfaces.\n\n### 8. What is the Dependency Inversion Principle (DIP)?\n\nDIP states that high-level modules should not depend on low-level modules, but both should depend on abstractions. It encourages the use of dependency injection.\n\n### 10. What is a virtual function?\n\nA virtual function allows dynamic binding or late binding, which enables the correct function to be called based on the object at runtime.\n\nC++\n``````class Base {\npublic:\nvirtual void show() {\ncout << \"Base class\" << endl;\n}\n};\n\nclass Derived : public Base {\npublic:\nvoid show() override {\ncout << \"Derived class\" << endl;\n}\n};\n\nint main() {\nBase* ptr;\nDerived d;\nptr = &d;\nptr->show(); // Output: \"Derived class\"\n\nreturn 0;\n}``````\n\n### 11. What are pure virtual functions?\n\nA pure virtual function is a virtual function with no implementation in the base class, making it an abstract function. Classes containing pure virtual functions are abstract and cannot be instantiated.\n\nC++\n``````class Shape {\npublic:\nvirtual void draw() = 0; // Pure virtual function\n};\n\nclass Circle : public Shape {\npublic:\nvoid draw() override {\ncout << \"Drawing a circle\" << endl;\n}\n};\n\nint main() {\nCircle c;\nc.draw(); // Output: \"Drawing a circle\"\n\n// Shape s; // Error: Cannot instantiate abstract class\n\nreturn 0;\n}``````\n\n### 13. Explain what friend classes and friend functions are.\n\nA friend class or function can access private and protected members of another class.\n\nC++\n``````class A {\nprivate:\nint x;\nfriend class B; // B is a friend class of A\n\npublic:\nA() : x(10) {}\n};\n\nclass B {\npublic:\nvoid display(A obj) {\ncout << \"Value of x in A: \" << obj.x << endl;\n}\n};\n\nint main() {\nA a;\nB b;\nb.display(a); // Output: \"Value of x in A: 10\"\n\nreturn 0;\n}``````\n\nC++\n``````class Complex {\nprivate:\ndouble real;\ndouble imag;\n\npublic:\nComplex(double r, double i) : real(r), imag(i) {}\n\nComplex operator+(const Complex& other) {\nreturn Complex(real + other.real, imag + other.imag);\n}\n};\n\nint main() {\nComplex a(2.0, 3.5);\nComplex b(1.5, 2.0);\n// c\n\n.real = 3.5, c.imag = 5.5\n\nreturn 0;\n}``````\n\n### 15. What is a copy constructor?\n\nA copy constructor creates a new object by copying the content of an existing object.\n\nC++\n``````class MyClass {\nprivate:\nint data;\n\npublic:\nMyClass(int d) : data(d) {}\n\n// Copy constructor\nMyClass(const MyClass& other) : data(other.data) {}\n};\n\nint main() {\nMyClass obj1(42);\nMyClass obj2 = obj1; // Uses the copy constructor\n\nreturn 0;\n}``````\n\n### 16. What is the Difference Between ‘Pass by Value’ and ‘Pass by Reference’\n\nC++\n``````// Pass by Value\nvoid modifyValue(int val) {\nval = val * 2; // Changes only the local copy\n}\n\nint main() {\nint num = 5;\nmodifyValue(num);\n// num still remains 5\n\nreturn 0;\n}\n\n// Pass by Reference\nvoid modifyReference(int& val) {\nval = val * 2; // Changes the original value\n}\n\nint main() {\nint num = 5;\nmodifyReference(num);\n// num is now 10\n\nreturn 0;\n}``````\n\n### 17. What is a template in OOP?\n\nTemplates allow the creation of generic classes or functions that can work with different data types.\n\nC++\n``````template <typename T>\nT add(T a, T b) {\nreturn a + b;\n}\n\nint main() {\nint sum1 = add(3, 5); // sum1 = 8\ndouble sum2 = add(1.5, 2.3); // sum2 = 3.8\n\nreturn 0;\n}``````\n\n### 18. Explain the term ‘aggregation’.\n\nAggregation represents a “has-a” relationship between classes, where one class contains another as a part.\n\nC++\n``````class Address {\n};\n\nclass Person {\nprivate:\n\npublic:\n};``````\n\n### 19. Explain the term ‘composition’.\n\nComposition represents a stronger form of aggregation, where one class is a part of another and has a lifecycle dependent on the containing class.\n\nC++\n``````class Engine {\n// Engine details\n};\n\nclass Car {\nprivate:\nEngine engine; // Composition: Car has an Engine\n\npublic:\nCar() : engine(Engine()) {}\n};``````\n\n### 20. What is a ‘this’ pointer? What is its use?\n\n‘this’ pointer is a pointer that refers to the current instance of the class. It is used to access members of the class when they have the same names as function parameters or to return the current object from member functions.\n\nC++\n``````class MyClass {\nprivate:\nint value;\n\npublic:\nvoid setValue(int value) {\nthis->value = value; // 'this' refers to the object's member\n}\n\nint getValue() {\nreturn this->value;\n}\n};\n\nint main() {\nMyClass obj;\nobj.setValue(42);\ncout << obj.getValue(); // Output: 42\n\nreturn 0;\n}``````\n\n### 1. Can you describe a situation where it would be appropriate to use a “friend” class or function in C++?\n\nUsing the “friend” keyword in C++ allows a class or function to access private members of another class. It should be used sparingly and with caution, as it breaks encapsulation. An appropriate situation to use “friend” is when you have a utility function or class that needs access to the private data of another class for efficiency reasons.\n\nExample:\n\nC++\n``````class MySecretClass {\nprivate:\nint secretValue;\n\npublic:\nMySecretClass(int value) : secretValue(value) {}\n\nfriend int getSecretValue(const MySecretClass& obj);\n};\n\nint getSecretValue(const MySecretClass& obj) {\nreturn obj.secretValue;\n}\n\nint main() {\nMySecretClass secretObj(42);\nint value = getSecretValue(secretObj);\n// value will be 42, accessed the private member using a friend function.\nreturn 0;\n}``````\n\n### 2. How does polymorphism promote extensibility?\n\nPolymorphism allows objects of different classes to be treated as objects of a common base class, enabling dynamic dispatch of function calls based on the actual type of the object. This promotes extensibility, as new derived classes can be added without modifying existing code, making it easier to introduce new behaviors.\n\nExample:\n\nC++\n``````class Shape {\npublic:\nvirtual void draw() const = 0;\n};\n\nclass Circle : public Shape {\npublic:\nvoid draw() const override {\n// Draw circle here.\n}\n};\n\nclass Square : public Shape {\npublic:\nvoid draw() const override {\n// Draw square here.\n}\n};\n\nvoid drawShape(const Shape& shape) {\nshape.draw(); // Calls the appropriate draw() based on the actual object type.\n}\n\nint main() {\nCircle circle;\nSquare square;\n\ndrawShape(circle); // Draws a circle.\ndrawShape(square); // Draws a square.\n\nreturn 0;\n}``````\n\n### 3. What are virtual destructors, and when would you use one?\n\nA virtual destructor is a destructor declared with the “virtual” keyword in the base class. When you have a class hierarchy and intend to delete derived class objects through a pointer to the base class, the base class’s destructor should be made virtual to ensure proper cleanup of resources.\n\nExample:\n\nC++\n``````class Base {\npublic:\nvirtual ~Base() {\n// Virtual destructor allows proper cleanup when deleting through base pointer.\n}\n};\n\nclass Derived : public Base {\npublic:\n~Derived() {\n// Derived destructor code here.\n}\n};\n\nint main() {\nBase* ptr = new Derived();\n// Use the object through the base pointer.\n\ndelete ptr; // Calls the proper destructors (Derived and then Base).\nreturn 0;\n}``````\n\n### 4. Explain the concept of pure virtual functions. What is an abstract class, and when should it be used?\n\nA pure virtual function is a virtual function in the base class that is declared with “= 0” and has no implementation. A class containing at least one pure virtual function becomes an abstract class, and objects cannot be created directly from it. Instead, it serves as a base for derived classes to implement the pure virtual functions.\n\nExample:\n\nC++\n``````class Shape {\npublic:\nvirtual void draw() const = 0; // Pure virtual function.\n\n// Non-pure virtual functions are optional in an abstract class.\nvirtual void printArea() const {\nstd::cout << \"Area: \" << area() << std::endl;\n}\n\n// Abstract classes can have regular functions with implementation.\ndouble area() const {\nreturn 0.0; // Base implementation; will be overridden in derived classes.\n}\n};\n\nclass Circle : public Shape {\npublic:\nvoid draw() const override {\n// Draw circle here.\n}\n\ndouble area() const override {\n// Calculate circle area here.\n}\n};\n\nclass Square : public Shape {\npublic:\nvoid draw() const override {\n// Draw square here.\n}\n\ndouble area() const override {\n// Calculate square area here.\n}\n};\n\nint main() {\n// Shape* shape = new Shape(); // Error, cannot create an instance of an abstract class.\nShape* circle = new Circle();\nShape* square = new Square();\n\ncircle->draw();\ncircle->printArea();\n\nsquare->draw();\nsquare->printArea();\n\ndelete circle;\ndelete square;\nreturn 0;\n}``````\n\n### 5. What is the difference between composition and aggregation?\n\nComposition and aggregation are both forms of association between classes, but they differ in ownership and lifetime relationships.\n\nComposition implies a strong ownership relationship where the lifetime of the part (member object) is controlled by the whole (containing object). In aggregation, the containing object may reference or use the aggregated object, but it doesn’t have sole responsibility for the aggregated object’s lifetime.\n\nExample of Composition:\n\nC++\n``````class Engine {\n// Engine class implementation.\n};\n\nclass Car {\nprivate:\nEngine engine; // Car owns an Engine object.\n\npublic:\n// Car class implementation.\n};``````\n\nExample of Aggregation:\n\nC++\n``````class Person {\n// Person class implementation.\n};\n\nclass Company {\nprivate:\nstd::vector<Person*> employees; // Company references Person objects, but doesn't own them.\n\npublic:\n// Company class implementation.\n};``````\n\n### 6. What is the use of the “volatile” keyword in C++? What implications does it have on compiler optimizations?\n\nThe “volatile” keyword is used to indicate that a variable can be modified by external sources not explicitly known to the compiler. It informs the compiler not to perform any optimizations (e.g., caching) that could interfere with the variable’s actual value.\n\nExample:\n\nC++\n``````volatile int sensorValue; // Declaring a volatile variable.\n\n// Assume some external hardware updates the sensorValue.\n// Without volatile, the compiler might optimize this loop as the value doesn't change in the loop.\nwhile (sensorValue < 100) {\n// Do something based on sensorValue.\n}\n}``````\n\n### 7. Explain the diamond problem in multiple inheritance. How is it addressed in C++ and in other OOP languages like Java?\n\nThe diamond problem occurs when a class inherits from two classes, and both those classes inherit from a common base class. This creates an ambiguity in the hierarchy, as the derived class has two copies of the base class, leading to conflicts in member access.\n\nExample:\n\nC++\n``````class A {\npublic:\nvoid doSomething() { /* A's implementation */ }\n};\n\nclass B : public A {\n};\n\nclass C : public A {\n};\n\nclass D : public B, public C {\n};\n\nint main() {\nD obj;\n// obj.doSomething(); // Error, ambiguity due to diamond problem.\nreturn 0;\n}``````\n\nIn C++, this problem is addressed by using virtual inheritance.\n\nThe virtual keyword is used to specify that there should be only one instance of the base class shared by all the derived classes.\n\nC++\n``````class A {\npublic:\nvoid doSomething() { /* A's implementation */ }\n};\n\nclass B : virtual public A {\n};\n\nclass C : virtual public A {\n};\n\nclass D : public B, public C {\n};\n\nint main() {\nD obj;\nobj.doSomething(); // No ambiguity due to virtual inheritance.\nreturn 0;\n}``````\n\nJava automatically addresses the diamond problem by allowing a class to inherit from only one concrete class (single inheritance) but supporting multiple interface implementations.\n\n### 8. In what scenario would you use multiple inheritance instead of interfaces?\n\nMultiple inheritance is used when a class needs to inherit functionality from multiple base classes, and those base classes provide implementation along with data. If the goal is to share the implementation code among multiple classes, multiple inheritance is preferred.\n\nExample:\n\nC++\n``````class Shape {\npublic:\nvoid draw() const {\n// Common draw code for all shapes.\n}\n};\n\nclass Color {\npublic:\nvoid setColor(int r, int g, int b) {\n// Set color implementation.\n}\n};\n\nclass ColoredShape : public Shape, public Color {\n// ColoredShape inherits both draw() from Shape and setColor() from Color.\n};\n\nint main() {\nColoredShape cShape;\ncShape.draw();\ncShape.setColor(255, 0, 0);\nreturn 0;\n}``````\n\n### 9. What is a covariant return type in Java?\n\nCovariant return types allow a method in a subclass to return a more specific type than the method in the superclass. This feature was introduced in Java 5 to improve readability and make the code more concise.\n\nExample:\n\nJava\n``````class Fruit {\n// Fruit class implementation.\n}\n\nclass Apple extends Fruit {\n// Apple class implementation.\n}\n\npublic Fruit getFruit() {\nreturn new Fruit();\n}\n}\n\n@Override\npublic Apple getFruit() {\nreturn new Apple(); // Covariant return type allows returning a more specific type.\n}\n}``````\n\n### 10. How does the Law of Demeter apply to Object-Oriented design?\n\nThe Law of Demeter (LoD) states that an object should only interact with its immediate connections and not with the objects further away in the class hierarchy. In simpler terms, it encourages limiting the number of direct dependencies between classes to reduce coupling and make the code easier to maintain and understand.\n\nExample:\n\nC++\n``````class Engine {\npublic:\nvoid start() {\n// Engine start implementation.\n}\n};\n\nclass Car {\nprivate:\nEngine engine;\n\npublic:\nvoid startCar() {\nengine.start(); // Following the Law of Demeter, Car should interact with its immediate component (Engine).\n}\n};``````\n\n### 11. How can you implement Object-Oriented Programming in languages that are not inherently object-oriented?\n\nIn languages that are not inherently object-oriented, you can still simulate object-oriented programming using structures or records to represent objects and functions to model methods. You’ll need to manually manage memory and dispatch function calls.\n\nExample in C:\n\nC++\n``````typedef struct {\nint length;\nint width;\n} Rectangle;\n\nvoid drawRectangle(Rectangle* rect) {\n// Drawing code for the rectangle.\n}\n\nint calculateArea(Rectangle* rect) {\nreturn rect->length * rect->width;\n}\n\nint main() {\nRectangle myRect = {10, 5};\ndrawRectangle(&myRect);\nint area = calculateArea(&myRect);\n// Use area and perform other operations.\nreturn 0;\n}``````\n\n### 12. In Java, is it possible to inherit from two classes? If not, how would you work around this?\n\nNo, in Java, a class can only inherit from a single direct superclass (single inheritance). However, you can achieve similar functionality by using interfaces, which allow a class to implement multiple interfaces and provide implementation for their methods.\n\nExample:\n\nJava\n``````interface Shape {\nvoid draw();\n}\n\ninterface Color {\nvoid setColor(int r, int g, int b);\n}\n\nclass ColoredShape implements Shape, Color {\n@Override\npublic void draw() {\n// Draw the colored shape.\n}\n\n@Override\npublic void setColor(int r, int g, int b) {\n// Set color implementation.\n}\n}``````\n\n### 13. What is operator overloading, and what are some good practices and pitfalls when using it?\n\nOperator overloading allows you to define custom behaviors for operators when used with user-defined types. It can improve code readability and make the class interface more intuitive. However, it should be used with caution to avoid confusing code and ensure consistency with the standard meaning of operators.\n\nExample:\n\nC++\n``````class ComplexNumber {\npublic:\ndouble real;\ndouble imag;\n\nComplexNumber(double r, double i) : real(r), imag(i) {}\n\nComplexNumber operator+(const ComplexNumber& other) const {\nreturn ComplexNumber(real + other.real, imag + other.imag);\n}\n\n// Other operator overloads, such as -, *, /, etc., can also be implemented.\n};\n\nint main() {\nComplexNumber num1(3.0, 4.0);\nComplexNumber num2(1.5, 2.5);\nComplexNumber result = num1 + num2; // Uses the overloaded + operator.\n\nreturn 0;\n}``````\n\nGood practices:\n\n• Overload operators to maintain their intuitive meaning.\n• Provide consistent behavior with built-in types.\n\nPitfalls:\n\n• Overusing operator overloading, which can make the code harder to understand.\n\n### 14. Can you explain a practical example of when and why you would use a nested or inner class?\n\nNested or inner classes are used when one class should be closely related to and used only within another class. It provides better encapsulation and helps to organize the code better.\n\nExample:\n\nJava\n``````class Outer {\nprivate int outerData;\n\nclass Inner {\nprivate int innerData;\n\nInner(int innerData) {\nthis.innerData = innerData;\n}\n\nint getSum() {\nreturn outerData + innerData; // Inner class can access outer class members.\n}\n}\n\nOuter(int outerData) {\nthis.outerData = outerData;\n}\n\nInner createInner(int innerData) {\nreturn new Inner(innerData);\n}\n}\n\npublic class Main {\npublic static void main(String[] args) {\nOuter outerObj = new Outer(10);\nOuter.Inner innerObj = outerObj.createInner(5);\nint sum = innerObj.getSum(); // Accessing inner class method.\n\nSystem.out.println(\"Sum: \" + sum); // Output: Sum: 15\n}\n}``````\n\nIn this example, the `Inner` class is closely related to the `Outer` class and requires access to its members. It’s used to perform a specific calculation involving both `Inner` and `Outer` data. By using an inner class, we keep the code more organized and encapsulate the `Inner` class’s functionality within the context of the `Outer` class.\n\n### 15. What is the problem with multiple inheritance, and how does the “interface” concept help resolve this issue?\n\nThe problem with multiple inheritance is the potential for the diamond problem, where a class inherits from two or more classes, which in turn inherit from the same base class. This creates ambiguity and can lead to conflicts in member access and function calls.\n\nThe “interface” concept, like in Java, helps to resolve this issue by providing a way to achieve multiple inheritance of type without the inheritance of implementation. An interface specifies a set of methods that a class implementing the interface must provide, ensuring a contract for behavior without introducing conflicts.\n\nExample:\n\nJava\n``````interface Animal {\nvoid makeSound();\n}\n\ninterface Flyable {\nvoid fly();\n}\n\nclass Bird implements Animal, Flyable {\n@Override\npublic void makeSound() {\nSystem.out.println(\"Chirp Chirp\");\n}\n\n@Override\npublic void fly() {\nSystem.out.println(\"Bird is flying\");\n}\n}\n\nclass Bat implements Animal, Flyable {\n@Override\npublic void makeSound() {\nSystem.out.println(\"Screech\");\n}\n\n@Override\npublic void fly() {\nSystem.out.println(\"Bat is flying\");\n}\n}\n\npublic class Main {\npublic static void main(String[] args) {\nBird bird = new Bird();\nbird.makeSound();\nbird.fly();\n\nBat bat = new Bat();\nbat.makeSound();\nbat.fly();\n}\n}``````\n\nIn this example, both `Bird` and `Bat` classes implement both `Animal` and `Flyable` interfaces, which specify the behavior they must provide. This allows us to achieve multiple inheritance of type without running into the diamond problem.\n\n### 16. What is the use of a private constructor? When would you use it?\n\nA private constructor is used to prevent the creation of objects of a class directly by users. It is often used in the singleton design pattern to ensure that only one instance of the class can be created.\n\nExample of Singleton using a private constructor:\n\nC++\n``````class Singleton {\nprivate:\nstatic Singleton* instance;\nSingleton() {} // Private constructor.\n\npublic:\nstatic Singleton* getInstance() {\nif (!instance) {\ninstance = new Singleton();\n}\nreturn instance;\n}\n};\n\nSingleton* Singleton::instance = nullptr;\n\nint main() {\nSingleton* singleton1 = Singleton::getInstance();\nSingleton* singleton2 = Singleton::getInstance();\n\n// Both pointers will point to the same instance.\nif (singleton1 == singleton2) {\nstd::cout << \"Same instance.\" << std::endl;\n}\n\nreturn 0;\n}``````\n\nIn this example, the `Singleton` class has a private constructor, and the only way to get an instance of it is through the `getInstance()` static method, which ensures that only one instance is created and returned.\n\n### 17. What is a virtual function, and why is it important in a base class?\n\nA virtual function is a function declared in the base class with the `virtual` keyword. It allows the function to be overridden in derived classes, enabling dynamic dispatch of function calls based on the actual object type.\n\nExample:\n\nC++\n``````class Shape {\npublic:\nvirtual void draw() const {\n// Base implementation for draw().\n}\n};\n\nclass Circle : public Shape {\npublic:\nvoid draw() const override {\n// Circle-specific draw implementation.\n}\n};\n\nclass Square : public Shape {\npublic:\nvoid draw() const override {\n// Square-specific draw implementation.\n}\n};\n\nint main() {\nCircle circle;\nSquare square;\n\nShape* shape1 = &circle;\nShape* shape2 = &square\n\nshape1->draw(); // Calls the draw() function of the Circle class.\nshape2->draw(); // Calls the draw() function of the Square class.\n\nreturn 0;\n}``````\n\nIn this example, the `draw()` function in the base class `Shape` is virtual, allowing it to be overridden in the derived classes `Circle` and `Square`. The function call is resolved dynamically at runtime based on the actual object type, facilitating polymorphism.\n\n### 18. In the context of Object-Oriented design, what does SOLID stand for?\n\nSOLID is an acronym for five principles of Object-Oriented Design, aimed at creating maintainable and scalable software.\n\n1. Single Responsibility Principle (SRP): A class should have only one reason to change, meaning it should have only one responsibility.\n\nExample:\n\nC++\n``````class Shape {\npublic:\nvoid draw() const {\n// Draw the shape.\n}\n\ndouble calculateArea() const {\n// Calculate the area of the shape.\n}\n};``````\n1. Open/Closed Principle (OCP): Software entities should be open for extension but closed for modification. New functionality should be added without altering existing code.\n\nExample:\n\nC++\n``````class Shape {\npublic:\nvirtual double calculateArea() const = 0; // Open for extension through derived classes.\n};``````\n1. Liskov Substitution Principle (LSP): Subtypes should be substitutable for their base types without affecting the program’s correctness.\n\nExample:\n\nC++\n``````class Rectangle {\npublic:\nvirtual void setWidth(int width) { /* Implementation */ }\nvirtual void setHeight(int height) { /* Implementation */ }\n};\n\nclass Square : public Rectangle {\npublic:\nvoid setWidth(int width) override { /* Implementation */ }\nvoid setHeight(int height) override { /* Implementation */ }\n};``````\n1. Interface Segregation Principle (ISP): Clients should not be forced to depend on interfaces they do not use. It’s better to have specific interfaces rather than one large interface.\n\nExample:\n\nC++\n``````class Printer {\npublic:\nvirtual void print() = 0;\n};\n\nclass Scanner {\npublic:\nvirtual void scan() = 0;\n};\n\nclass MultiFunctionDevice : public Printer, public Scanner {\npublic:\nvoid print() override { /* Implementation */ }\nvoid scan() override { /* Implementation */ }\n};``````\n1. Dependency Inversion Principle (DIP): High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions.\n\nExample:\n\nC++\n``````class Database {\n// Implementation details.\n};\n\nclass UserRepository {\nprivate:\nDatabase* db; // Dependency on the Database implementation.\n\npublic:\nUserRepository(Database* db) : db(db) {}\n\n// Repository methods using the Database.\n};``````\n\nIn this example, the `UserRepository` is directly dependent on the `Database` implementation, violating the DIP. To follow the principle, we can introduce an abstraction, such as an interface or a base class, to decouple the high-level module from the low-level details.\n\n### 19. Explain the concept of Reflection in Java and C#.\n\nReflection is a feature in Java and C# that allows a program to examine its own structure at runtime. It enables the inspection and manipulation of classes, methods, fields, and interfaces, even if they were not known at compile-time.\n\nExample in Java:\n\nJava\n``````import java.lang.reflect.*;\n\nclass MyClass {\nprivate int value;\npublic void setValue(int value) { this.value = value; }\npublic int getValue() { return value; }\n}\n\npublic class Main {\npublic static void main(String[] args) {\nMyClass obj = new MyClass();\n\n// Using Reflection to get and set the private field \"value\".\ntry {\nField field = obj.getClass().getDeclaredField(\"value\");\nfield.setAccessible(true);\n\nfield.setInt(obj, 42);\nSystem.out.println(\"Value: \" + field.getInt(obj));\n} catch (NoSuchFieldException | IllegalAccessException e) {\ne.printStackTrace();\n}\n}\n}``````\n\nIn this example, we use Reflection to access and modify the private field “value” in the `MyClass` instance, which we wouldn’t normally be able to do directly.\n\n### 20. What are some issues with the OOP paradigm, and how might they be resolved?\n\nSome issues with the OOP paradigm include:\n\n• Overuse of inheritance: Deep inheritance hierarchies can lead to tight coupling and difficult-to-maintain code. Consider favoring composition and interfaces over inheritance when possible.\n• Lack of flexibility in some languages: Some languages don’t support multiple inheritance or don’t provide enough features for modern OOP. In such cases, using design patterns and interfaces can help work around these limitations.\n• Performance overhead: Object-oriented designs can sometimes introduce performance overhead due to dynamic dispatch and object creation. In performance-critical scenarios, consider using more optimized data structures and algorithms.\n• Complexity and overengineering: Overusing design patterns and creating excessively abstract class hierarchies can lead to unnecessary complexity. Follow the KISS (Keep It Simple, Stupid) principle and YAGNI (You Ain’t Gonna Need It) to avoid overengineering.\n• Encapsulation violations: Improper use of access modifiers can lead to encapsulation violations, breaking the principle of information hiding. Ensure that classes expose only necessary interfaces and keep their internal details hidden.\n\n## MCQ Questions\n\n### 1. What is OOPs?\n\na) Object-Oriented Programming System\nb) Object-Oriented Programming Structure\nc) Object-Oriented Programming Style\nd) Object-Oriented Programming Syntax\n\na) Encapsulation\nb) Inheritance\nc) Polymorphism\nd) Compilation\n\n### 3. What is the main purpose of encapsulation in OOPs?\n\na) Data hiding\nb) Code reusability\nc) Code organization\nd) Code optimization\n\na) “is a”\nb) “has a”\nc) “uses a”\nd) “contains a”\n\na) this\nb) new\nc) abstract\nd) final\n\n### 6. What is the purpose of the super keyword in Java?\n\na) Accessing the superclass methods and variables\nb) Initializing the superclass object\nc) Calling the constructor of the superclass\nd) Modifying the superclass behavior\n\nAnswer: a) Accessing the superclass methods and variables\n\na) Inheritance\nb) Encapsulation\nc) Polymorphism\nd) Abstraction\n\na) private\nb) protected\nc) public\nd) default\n\n### 9. What is the purpose of the final keyword in Java?\n\na) It is used to define a constant value.\nb) It is used to prevent inheritance.\nc) It is used to prevent method overriding.\nd) All of the above.\n\nAnswer: d) All of the above.\n\n### 10. Which of the following is an example of runtime polymorphism in Java?\n\nb) Method overriding\nc) Method hiding\nd) Method abstracting\n\na) Inheritance\nb) Encapsulation\nc) Polymorphism\nd) Interface\n\n### 12. What is the output of the following code?\n\nJava\n``````class A {\nvoid display() {\nSystem.out.println(\"Class A\");\n}\n}\n\nclass B extends A {\nvoid display() {\nSystem.out.println(\"Class B\");\n}\n}\n\npublic class Main {\npublic static void main(String[] args) {\nA obj = new B();\nobj.display();\n}\n}``````\n\na) Class A\nb) Class B\nc) Compile-time error\nd) Runtime error\n\na) static\nb) final\nc) abstract\nd) private\n\na) Inheritance\nb) Encapsulation\n\nc) Polymorphism\nd) Abstraction\n\na) Aggregation\nb) Composition\nc) Inheritance\nd) Association\n\n### 16. What is the output of the following code?\n\nJava\n``````interface MyInterface {\ndefault void display() {\nSystem.out.println(\"Interface\");\n}\n}\n\nclass MyClass implements MyInterface {\npublic static void main(String[] args) {\nMyClass obj = new MyClass();\nobj.display();\n}\n}``````\n\na) Interface\nb) MyClass\nc) Compile-time error\nd) Runtime error\n\na) Abstraction\nb) Encapsulation\nc) Polymorphism\nd) Inheritance\n\na) _variable\nb) 123abc\nc) \\$name\nd) firstName\n\na) this\nb) new\nc) class\nd) void\n\n### 20. Which of the following is not a type of exception in Java?\n\na) Checked exception\nb) Unchecked exception\nc) Runtime exception\nd) Custom exception\n\nCheck Also\nClose"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7385756,"math_prob":0.68974626,"size":42773,"snap":"2023-40-2023-50","text_gpt3_token_len":9686,"char_repetition_ratio":0.14564288,"word_repetition_ratio":0.115185566,"special_character_ratio":0.24424286,"punctuation_ratio":0.16281185,"nsfw_num_words":4,"has_unicode_error":false,"math_prob_llama3":0.9526269,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T18:44:11Z\",\"WARC-Record-ID\":\"<urn:uuid:6616c747-b49f-48c7-8d6a-f4593cea0973>\",\"Content-Length\":\"968680\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4ef37fa-35ce-4fa3-85f5-9d2e7348802e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8729e7b9-bf86-4bc9-91cc-6b1544fdb386>\",\"WARC-IP-Address\":\"172.67.212.88\",\"WARC-Target-URI\":\"https://codinginterviewpro.com/interview-questions-and-answers/all-new-60-oops-interview-questions/\",\"WARC-Payload-Digest\":\"sha1:AWFY4YLJ5I5PZ5FL25H3BZ2TLDNUKZQV\",\"WARC-Block-Digest\":\"sha1:47P7TYZJOKIAGEX6PB4JQXJKG65PASB6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506029.42_warc_CC-MAIN-20230921174008-20230921204008-00444.warc.gz\"}"} |
https://www.openml.org/d/42897 | [
"Data\nbias-correction\n\n# bias-correction\n\nactive ARFF CC-BY Visibility: public Uploaded 19-05-2021 by Meilina Reksoprodjo\nIssue #Downvotes for this reason By\n\nAuthor: Dongjin Cho,Cheolhee Yoo Source: [UCI](https://archive.ics.uci.edu/ml/datasets/Bias+correction+of+numerical+prediction+model+temperature+forecast) - 2020 Please cite: [Paper](https://www.semanticscholar.org/paper/Comparative-Assessment-of-Various-Machine-Bias-for-Cho-Yoo/a24c181f054e35e8caf4fff78949f20569cb11b2 Bias correction of numerical prediction model temperature forecast dataset This data is for the purpose of bias correction of next-day maximum and minimum air temperatures forecast of the LDAPS model operated by the Korea Meteorological Administration over Seoul, South Korea. This data consists of summer data from 2013 to 2017. The input data is largely composed of the LDAPS model's next-day forecast data, in-situ maximum and minimum temperatures of present-day, and geographic auxiliary variables. There are two outputs (i.e. next-day maximum and minimum air temperatures) in this data. Hindcast validation was conducted for the period from 2015 to 2017. ### Attribute information 1. station - used weather station number: 1 to 25 2. Date - Present day: yyyy-mm-dd ('2013-06-30' to '2017-08-30') 3. Present_Tmax - Maximum air temperature between 0 and 21 h on the present day (degrees Celsius): 20 to 37.6 4. Present_Tmin - Minimum air temperature between 0 and 21 h on the present day (degrees Celsius): 11.3 to 29.9 5. LDAPS_RHmin - LDAPS model forecast of next-day minimum relative humidity (%): 19.8 to 98.5 6. LDAPS_RHmax - LDAPS model forecast of next-day maximum relative humidity (%): 58.9 to 100 7. LDAPS_Tmax_lapse - LDAPS model forecast of next-day maximum air temperature applied lapse rate (degrees Celsius): 17.6 to 38.5 8. LDAPS_Tmin_lapse - LDAPS model forecast of next-day minimum air temperature applied lapse rate (degrees Celsius): 14.3 to 29.6 9. LDAPS_WS - LDAPS model forecast of next-day average wind speed (m/s): 2.9 to 21.9 10. LDAPS_LH - LDAPS model forecast of next-day average latent heat flux (W/m2): -13.6 to 213.4 11. LDAPS_CC1 - LDAPS model forecast of next-day 1st 6-hour split average cloud cover (0-5 h) (%): 0 to 0.97 12. LDAPS_CC2 - LDAPS model forecast of next-day 2nd 6-hour split average cloud cover (6-11 h) (%): 0 to 0.97 13. LDAPS_CC3 - LDAPS model forecast of next-day 3rd 6-hour split average cloud cover (12-17 h) (%): 0 to 0.98 14. LDAPS_CC4 - LDAPS model forecast of next-day 4th 6-hour split average cloud cover (18-23 h) (%): 0 to 0.97 15. LDAPS_PPT1 - LDAPS model forecast of next-day 1st 6-hour split average precipitation (0-5 h) (%): 0 to 23.7 16. LDAPS_PPT2 - LDAPS model forecast of next-day 2nd 6-hour split average precipitation (6-11 h) (%): 0 to 21.6 17. LDAPS_PPT3 - LDAPS model forecast of next-day 3rd 6-hour split average precipitation (12-17 h) (%): 0 to 15.8 18. LDAPS_PPT4 - LDAPS model forecast of next-day 4th 6-hour split average precipitation (18-23 h) (%): 0 to 16.7 19. lat - Latitude (degrees): 37.456 to 37.645 20. lon - Longitude (degrees): 126.826 to 127.135 21. DEM - Elevation (m): 12.4 to 212.3 22. Slope - Slope (degrees): 0.1 to 5.2 23. Solar radiation - Daily incoming solar radiation (wh/m2): 4329.5 to 5992.9 24. Next_Tmax - The next-day maximum air temperature (degrees Celsius): 17.4 to 38.9 25. Next_Tmin - The next-day minimum air temperature (degrees Celsius): 11.3 to 29.8\n\n### 25 features\n\n station numeric 25 unique values 2 missing Date string 310 unique values 2 missing Present_Tmax numeric 167 unique values 70 missing Present_Tmin numeric 155 unique values 70 missing LDAPS_RHmin numeric 7672 unique values 75 missing LDAPS_RHmax numeric 7664 unique values 75 missing LDAPS_Tmax_lapse numeric 7675 unique values 75 missing LDAPS_Tmin_lapse numeric 7675 unique values 75 missing LDAPS_WS numeric 7675 unique values 75 missing LDAPS_LH numeric 7675 unique values 75 missing LDAPS_CC1 numeric 7569 unique values 75 missing LDAPS_CC2 numeric 7582 unique values 75 missing LDAPS_CC3 numeric 7599 unique values 75 missing LDAPS_CC4 numeric 7524 unique values 75 missing LDAPS_PPT1 numeric 2812 unique values 75 missing LDAPS_PPT2 numeric 2510 unique values 75 missing LDAPS_PPT3 numeric 2356 unique values 75 missing LDAPS_PPT4 numeric 1918 unique values 75 missing lat numeric 12 unique values 0 missing lon numeric 25 unique values 0 missing DEM numeric 25 unique values 0 missing Slope numeric 27 unique values 0 missing Solar radiation numeric 1575 unique values 0 missing Next_Tmax numeric 183 unique values 27 missing Next_Tmin numeric 157 unique values 27 missing\n\n### 19 properties\n\n7752\nNumber of instances (rows) of the dataset.\n25\nNumber of attributes (columns) of the dataset.\nNumber of distinct values of the target attribute (if it is nominal).\n1248\nNumber of missing values in the dataset.\n164\nNumber of instances with at least one value missing.\n24\nNumber of numeric attributes.\n0\nNumber of nominal attributes.\n0\nPercentage of binary attributes.\n2.12\nPercentage of instances having missing values.\n0.64\nPercentage of missing values.\nAverage class difference between consecutive instances.\n96\nPercentage of numeric attributes.\n0\nNumber of attributes divided by the number of instances.\n0\nPercentage of nominal attributes.\nPercentage of instances belonging to the most frequent class.\nNumber of instances belonging to the most frequent class.\nPercentage of instances belonging to the least frequent class.\nNumber of instances belonging to the least frequent class.\n0\nNumber of binary attributes."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7910959,"math_prob":0.83946127,"size":3290,"snap":"2021-43-2021-49","text_gpt3_token_len":1017,"char_repetition_ratio":0.2008521,"word_repetition_ratio":0.21703854,"special_character_ratio":0.33708206,"punctuation_ratio":0.15712188,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9506652,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T07:15:32Z\",\"WARC-Record-ID\":\"<urn:uuid:7a4e870d-5489-4da7-9803-84b5d763c307>\",\"Content-Length\":\"34750\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f2285786-3c5c-4c0d-a7f2-329021ac5421>\",\"WARC-Concurrent-To\":\"<urn:uuid:4801b970-2d47-48d5-afc8-d54396f7cd63>\",\"WARC-IP-Address\":\"131.155.11.11\",\"WARC-Target-URI\":\"https://www.openml.org/d/42897\",\"WARC-Payload-Digest\":\"sha1:XHL575BSD2ZUPRAF6ZOFMDLXZA37EGRB\",\"WARC-Block-Digest\":\"sha1:URN2SBO3LMLEZYI6D3OSTT35UTE6YALJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585911.17_warc_CC-MAIN-20211024050128-20211024080128-00020.warc.gz\"}"} |
https://lists.freebsd.org/pipermail/freebsd-bugs/2007-June/024874.html | [
"# kern/109024: [msdosfs] mount_msdosfs: msdosfs_iconv: Operation not permitted\n\nAlexander Pyhalov alp at rsu.ru\nThu Jun 21 13:10:07 UTC 2007\n\n```The following reply was made to PR kern/109024; it has been noted by GNATS.\n\nFrom: Alexander Pyhalov <alp at rsu.ru>\nTo: bug-followup at FreeBSD.org, ggg_mail at inbox.ru\nCc:\nSubject: Re: kern/109024: [msdosfs] mount_msdosfs: msdosfs_iconv: Operation\nnot permitted\nDate: Thu, 21 Jun 2007 16:49:06 +0400\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n<html>\n<body bgcolor=3D\"#ffffff\" text=3D\"#000000\">\n<font face=3D\"Courier New, Courier, monospace\">Bug was wrongly closed,\nbecause with loaded module msdosfs_iconv.ko mount says mount_msdosfs:\nmsdosfs_iconv: Operation not permitted. <br>\nThe reason, as I understand, is in kiconv. If you compile and run the\nfollowing code at sturtup=9A ( you should run it as root, for example,\nusing rc.d), everything works correctly.<br>\n<br>\n#include <sys/stat.h><br>\n#include <stdio.h><br>\n#include <sys/iconv.h><br>\n<br>\nint main()<br>\n{<br>\n=9A=9A=9A=9A=9A=9A=9A int er;<br>\n;<br>\n=9A=9A=9A=9A=9A=9A=9A if(er)<br>\n=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A printf(\"Er=3D%d\\n\",er);<br>=\n\n>\n=9A=9A=9A=9A=9A=9A=9A if(er)<br>\n=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A=9A printf(\"Er2=3D%d\\n\",er);<br=\n>\n=9A=9A=9A=9A=9A=9A=9A return 0;<br>\n}<br>\n<br>\n<br>\n</font>\n<pre class=3D\"moz-signature\" cols=3D\"72\">--=20\n=F3 =D5=D7=C1=D6=C5=CE=C9=C5=CD,=20\n=E1=CC=C5=CB=D3=C1=CE=C4=D2 =F0=D9=C8=C1=CC=CF=D7,\n=D3=C9=D3=D4=C5=CD=CE=D9=CA =C1=C4=CD=C9=CE=C9=D3=D4=D2=C1=D4=CF=D2 =E0=E7=\n=E9=EE=E6=EF =E0=E6=F5.\n</pre>\n</body>\n</html>\n\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5765892,"math_prob":0.7547852,"size":1828,"snap":"2019-43-2019-47","text_gpt3_token_len":840,"char_repetition_ratio":0.19188596,"word_repetition_ratio":0.032786883,"special_character_ratio":0.4119256,"punctuation_ratio":0.13475177,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99967575,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-17T07:07:12Z\",\"WARC-Record-ID\":\"<urn:uuid:b846baea-7d67-4d1d-aa52-52778696f74e>\",\"Content-Length\":\"5280\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ccca1415-7a44-4e71-8212-f128867f6712>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2bdf3b6-09e9-44a7-bb62-ad27db6d3534>\",\"WARC-IP-Address\":\"96.47.72.84\",\"WARC-Target-URI\":\"https://lists.freebsd.org/pipermail/freebsd-bugs/2007-June/024874.html\",\"WARC-Payload-Digest\":\"sha1:J6ZSSJTUPGZTET57J5ONJVWBLLESGUN6\",\"WARC-Block-Digest\":\"sha1:C3P6E3NWSUQLSQ75OVUTAYSMHOAYPK5U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668896.47_warc_CC-MAIN-20191117064703-20191117092703-00182.warc.gz\"}"} |
https://cstheory.stackexchange.com/questions/27460/approximation-algorithms-for-min-vector-subset-sum-over-gf2 | [
"# Approximation algorithms for min vector subset-sum over GF(2)\n\nIn this question vzn asked about the following problem, which I'll call Vector-Subset-Sum.\n\nGiven a set of vectors $v_i$ over GF(2) and a target vector $y$, is there a subset of the $v_i$ summing to $y$?\n\n1. The decision problem is in P.\n2. The natural minimization problem (finding the smallest subset) is NP-hard.\n3. It's related to the problem of finding sparse codewords in a linear code.\n\nMy question is: are there any known polynomial-time approximation algorithms for the minimization problem in (2)? I know very little about coding theory, so there may be some obvious or well-known approximation-preserving reductions to (3) that I have not seen.\n\n• By the way, in a paper with Indyk, Woodruff and Xie (drona.csa.iisc.ernet.in/~arnabb/boolcycles.pdf), we showed that the problem of finding whether there are $k$ $v_i$'s that sum to $y$ requires at least $\\min(n^{\\Omega(k)}, 2^{\\Omega(d)})$ time assuming ETH, if $v_1, \\dots, v_n$ are $n$ vectors of dimension $d$. This bound is also tight. Nov 16, 2014 at 5:42\n\nDumer, Miccancio and Sudan showed that the minimum distance of a linear code is not approximable to any constant factor in randomized polynomial time, unless $\\mathsf{NP} = \\mathsf{RP}$. The minimum distance problem is the same as your problem if the $v_i$'s form the columns of the parity check matrix and $y = 0$.\n• “The minimum distance problem is the same as your problem if the v_i's form the columns of the parity check matrix and y=0.” As is stated, this is not true. The special case of the problem in question with y=0 is easy to solve: the zero vector is always the optional solution. The minimum distance problem is equivalent to finding the solution with the second smallest weight. I think that you need a slightly more delicate randomized reduction (by adding one constraint $c^T x=1$ with a randomized chosen vector c). Nov 16, 2014 at 19:17"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9273737,"math_prob":0.9972413,"size":705,"snap":"2023-40-2023-50","text_gpt3_token_len":165,"char_repetition_ratio":0.115549214,"word_repetition_ratio":0.0,"special_character_ratio":0.22836879,"punctuation_ratio":0.09701493,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99984217,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T02:47:54Z\",\"WARC-Record-ID\":\"<urn:uuid:bc9048ff-9b18-4727-8429-bc030fe29e03>\",\"Content-Length\":\"163314\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78c250c0-351f-4235-85e4-f732c639ec8e>\",\"WARC-Concurrent-To\":\"<urn:uuid:a01b2828-670f-4d14-8c6a-007ebe92dd80>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://cstheory.stackexchange.com/questions/27460/approximation-algorithms-for-min-vector-subset-sum-over-gf2\",\"WARC-Payload-Digest\":\"sha1:CHYIFYCKXUYWE4YKBWWDVBKM7KHXLR4Y\",\"WARC-Block-Digest\":\"sha1:IA6DI74LFNTOFXTNIQEA7N5OFT4HNAIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100710.22_warc_CC-MAIN-20231208013411-20231208043411-00233.warc.gz\"}"} |
https://islamtarihikaynaklari.com/piercing/140-pounds-in-kg.php | [
"# Bikini 140 Pounds In Kg Foton\n\nNya Inlägg\n\n• ## Proon Sex",
null,
"",
null,
"",
null,
"",
null,
"### Erotisk Convert lbs to kg - Conversion of Measurement Units Pictures\n\nPlease enable Javascript to use the Rainbow Spånga converter. How many lbs in 1 kg? The answer is 2. We assume you are converting between pound and kilogram. Use this page to learn how to convert between pounds and kilograms.\n\nType in your own numbers in the form to convert the units! You can do the reverse unit conversion from kg to lbs Fluktare, or enter Kilted Bros two units below:.\n\nThe Pounsd abbreviation: lb is a unit of mass or weight in a number of different systems, including English units, Imperial units, and United States customary units. Its size can vary 140 Pounds In Kg system to system.\n\nThe international avoirdupois pound is equal to exactly The definition of the international pound was agreed by the United States and countries of the 140 Pounds In Kg of Nations in In the United Kingdom, the use of the international pound was implemented in the Weights and Measures Act An avoirdupois pound is equal to 16 avoirdupois ounces and to exactly 7, grains.\n\nThe kilogram or kilogramme, symbol: kg is the SI base unit of mass. A gram is defined as one thousandth of a kilogram. Conversion of units describes equivalent units of mass in other systems. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type Poounds unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types.",
null,
"",
null,
"",
null,
"",
null,
"Please enable Javascript to use the unit converter. How many lbs in 1 kg? The answer is 2.",
null,
"Use this to learn how to convert between pounds and kilograms. Type in your own numbers in the form to convert the units! ›› Quick conversion chart of lbs to kg. 1 lbs to kg = kg. 5 lbs to kg = kg. 10 lbs to kg = kg. 20 lbs to kg = kg. 30 lbs to kg = kg. 40 lbs to kg = kg. 50 lbs to kg.",
null,
"",
null,
"",
null,
"Pounds (lb) = Kilograms (kg) Visit Kilograms to Pounds Conversion. Pounds: The pound or pound-mass (abbreviations: lb, lbm, lbm, ℔) is a unit of mass with several definitions. Nowadays, the common is the international avoirdupois pound which is legally defined as exactly kilograms. A pound is equal to\n\n2021 islamtarihikaynaklari.com"
] | [
null,
"https://lbs-to-kg.appspot.com/image/140.png",
null,
"https://qph.fs.quoracdn.net/main-qimg-8e802a0e9d7fb708ec431fab5d51a2a0",
null,
"https://pbs.twimg.com/media/CG_OXmfW0AAdD_n.jpg",
null,
"https://convertermaniacs.com/images/lbs-to-kg.png",
null,
"https://www.thecalculatorsite.com/images/charts/pounds-kg-84-to-195.png",
null,
"https://i2.wp.com/tim.blog/wp-content/uploads/2013/05/baseline-before-experiment-190-pounds.jpg",
null,
"https://i.pinimg.com/736x/8a/ab/f5/8aabf5b61c9cd950cc85e70cbd5c852a--affirmations-inspirational.jpg",
null,
"https://images-na.ssl-images-amazon.com/images/I/6192VP2LQtL.__AC_SY445_SX342_QL70_ML2_.jpg",
null,
"https://www.sycor.com/pub/media/wysiwyg/technical_information/KG_and_LB.png",
null,
"https://converter.ninja/images/140_lb_in_kg.jpg",
null,
"https://i.dietdoctor.com/wp-content/uploads/2015/12/Suzz2-800x804.png",
null,
"https://i2.wp.com/tim.blog/wp-content/uploads/2013/05/baseline-before-experiment-190-pounds.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89066344,"math_prob":0.9467914,"size":2141,"snap":"2021-43-2021-49","text_gpt3_token_len":518,"char_repetition_ratio":0.1483388,"word_repetition_ratio":0.1658031,"special_character_ratio":0.22886501,"punctuation_ratio":0.12785389,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9537574,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,6,null,1,null,1,null,null,null,4,null,3,null,1,null,1,null,10,null,2,null,1,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T00:38:08Z\",\"WARC-Record-ID\":\"<urn:uuid:c4f2f4eb-63b2-4b28-9cf9-2307ed2c2f36>\",\"Content-Length\":\"14105\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:817d681e-4ddf-4ef2-8dac-678f96b6c099>\",\"WARC-Concurrent-To\":\"<urn:uuid:8556f65e-c906-4f4f-aba7-e0f6f1d0d881>\",\"WARC-IP-Address\":\"172.67.182.59\",\"WARC-Target-URI\":\"https://islamtarihikaynaklari.com/piercing/140-pounds-in-kg.php\",\"WARC-Payload-Digest\":\"sha1:DLSNZ2BCTBLM2TSEDZS4XSRTMQ2NS55F\",\"WARC-Block-Digest\":\"sha1:2J7U3Y5X2QUX426B7DW3O62CKJI6DV3G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585186.33_warc_CC-MAIN-20211018000838-20211018030838-00106.warc.gz\"}"} |
https://six-sigma-online.com/index.php/component/content/article/105-six-sigma-heretic/157-performing-a-long-term-msa?Itemid=437 | [
"## Testing through time stability\n\nAhh, measurement system analysis—the basis for all our jobs because, as Lord Kelvin said, “… When you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.” How interesting it is then, that we who thrive on data so frequently don't have any proof that the numbers we're using relate to the event we are measuring—hence my past few articles about the basics of measurement system analysis in “Letting You In on a Little Secret,” on how to do a potential study in “The Mystery Measurement Theatre, and on how to do a short-term study in “Performing a Short-Term MSA Study.” The only (and most important) topic remaining is how to perform a long-term study, which is the problem I left with you last month.\n\nSo read on to see how.\n\nThe potential study told us if we even have a hope of being able to use a measurement system to generate data (numbers related to an event) as opposed to numbers (symbols manipulated by mathematicians for no apparent reason other than entertainment). The short-term study allowed us to test the system performance a little more rigorously, perhaps in preparation for using it in our process. A measurement system’s performance is quantified by:\n\n• Repeatability—the amount of variability the same system (operator, device) exhibits when measuring exactly the same thing multiple times\n• Reproducibility—the amount of variability due to different operators using the same device, or maybe the same operators using different devices\n• %R&R—a combination of the repeatability and reproducibility that tells us how easy it is for this measurement system to correctly classify product as conforming or nonconforming to our specification\n• Bias—the amount that the average measurement is off of the “true” value\n\nHowever, none of these things have any meaning if the measurement system changes through time, and neither the potential nor short-term study really tests for through-time stability. With an unstable gauge, I might convene a Six Sigma team to work on a problem that doesn’t exist, put in systems to control a process based on a random number generator, or scrap product that is conforming. Simply put, my ability to understand and control my process is completely compromised. A measurement system that is out of control is even worse than a process that is out of control, since we don’t even have a hint of what is really going on.\n\nThus the need for the long-term study, which allows us to assess in detail exactly how our measurement system is performing through time. The long-term study is the Holy Grail (not the Money Python kind) of measurement system analysis, and with it we can state with confidence that the system is producing data, and not just numbers.\n\nAs before, I gave you a link to a totally free and awesome spreadsheet that will help you with your MSA work, and I gave you some data for the following scenario:\n\nA statistical facilitator and an engineer wish to conduct a gauge capability analysis (long-term) for a particular ignition signal processing test on engine control modules. The test selected for study measures voltage which has the following specifications:\n\nIGGND = 1.4100 ± 0.0984 Volts (Specification)\n\nEight control modules are randomly selected from the production line at the plant, and run (in random order, of course) through the tester at one hour intervals (but randomly within each hour). This sequence is repeated until 25 sample measures (j = 25) of size eight (n = 8) have been collected. The assumption is that these voltages are constant and the only variation we see in remeasuring a module is gauge variation.\n\nRight off the bat, we know that we only measure this engine control module; if we measured others or to different target voltages, we would include them in the study as well. We select these eight parts and keep remeasuring the same ones each hour. We also are assuming that different operators have no effect, since we are only using one.\n\nTo start off the spreadsheet calculates the mean and standard deviation across each hour’s measurements. Regardless of the actual voltages the eight modules have, the average of those voltages must be the same, right? One way we will eventually look at the measurement error over time will be by looking at how that average moves around. Because we have eight modules, we can also calculate a standard deviation across these eight. But be careful that you understand what this is. There are two components of variability in this standard deviation, only one of which relates to gauge variability. There is some measurement error as I take a reading for each one, but there is also the fact that the eight modules are producing somewhat different voltages from each other as well. Even if there were no measurement error, we still would calculate a standard deviation due to the part differences. This second variance component is of no interest to us for the MSA, but we had better not forget about it as we go forward.",
null,
"Figure 1: Worksheet calculations\n\nFirst, we need some validation that we can use the numbers. If the measurement process is in control, then the measurements for each part over time should be in control. So we take a look at each part on an individuals chart (the limit comes from the moving ranges, which I leave out of sight for this example). In order for the moving range to relate to the dispersion, we want to check normality for all the parts:",
null,
"Figure 2: Normality tests for the eight modules—output from MVP stats\n\nWith a sample size of 25 we probably are going to rely on the skewness and kurtosis indices, and they allow us to assume the variability is distributed normally. So let’s take a look at those individual charts on all the parts.",
null,
"",
null,
"Figure 3: Individuals charts for each part through time—limits from the moving range\n\nWe do see a point outside the lower control limit on part 4 and a larger than expected moving range on part 7 (both of which we would have investigated once the long-term study was in place). But two out of 200 observations is well within what we would expect with Type I error rate of 0.0027 (the rate you get a point out of the ±3σ control limits due to chance and chance alone) so I am comfortable saying that, so far, the gauge looks stable with respect to location. If a particular part became damaged at some point, or was a lot more difficult to read than another part, it should show up on these charts.\n\nWhile individual charts are really useful, they have their weaknesses, one of which is a lack of sensitivity to pretty big shifts in the mean. Thankfully, we have those means we calculated back in the worksheet calcul figure 1 to increase the sensitivity to a global shift in the average. Remember how the standard deviation across the parts has two components of variability? That is why we can’t just do an X-bar and an s chart using the usual limits—that “s” is inflated due to the part-to-part differences and will give us limits on the mean and the s chart that are too large. We get around this by recalling that we can plot any statistic on an individuals chart—though we may have to adjust the limits for the shape of the distribution. We are in good shape (pun intended) to use the individuals chart for our means, because due to the central limit theorem, those 25 averages of eight modules will tend to be distributed normally, and therefore the moving range of the means will relate to the dispersion of the means.",
null,
"Figure 4: Means as individuals control chart—limits from the moving range\n\nFigure 4 further supports our notion that the location measurements are stable through time. If there were some sort of a shift across many or all the parts, it would show up here, so actions such as a recalibration, retaring, or dropping the gauge on the floor would show up on this chart. Due to the central limit theorem, this chart will be far more sensitive to shifts in the average than the individuals charts on each part.\n\nWe also know that control for continuous data isn't assessed by just looking at the average—we need to look at that dispersion as well. Using the same trick as with the means, we will plot the standard deviations (extra component of variability and all) on an individuals chart. If the measurement error is normal and the same for every part, then regardless of the actual voltages the standard deviations across the parts ought to be distributed close to (though not exactly) normal. (Again, this is different than the random sampling distribution of the standard deviation, which would be very clearly positively skewed.)\n\nThe upshot is that we plot the standard deviations on an individuals chart with limits from the moving ranges as well.",
null,
"Figure 5: Standard deviations across eight parts plotted as individuals—limits from the moving range\n\nHere we have one point outside of the limits, which if we were up and running with the chart, we would have reacted to by investigating. Because the measurements are already done, I am going to continue watching that control chart very closely and any statement about control is going to be provisional. The types of events that would show up here would be if there was a global change in the dispersion of the measurements—perhaps a control circuit going bad or a change in the standard operating procedure.\n\nAs with the short-term study, we also want to keep an eye on the relationship between the average measurement and the variation of the readings. We want to see no correlation between them—if we do, then the error of the measurement changes with the magnitude of what you are measuring, and so your ability to correctly classify as conforming or not changes too. We will check that with a correlation between the mean and standard deviation.",
null,
"Figure 6: Correlation between magnitude (mean) and variation (standard deviation)\n\nWe only have eight points because we only have eight different parts. Normally, we won’t do correlations on such few points, but we would have already tested this with the short-term study. (Which you DID do, right?) This is just to make sure nothing really big changed. This correlation is not significant, so we can check that off our list.\n\nAt this point we are (provisionally) saying that there is nothing terribly strange going on in our measurement system through time—it seems to be stable sliced a couple of ways, and the magnitude and dispersion are independent. We can now, finally, begin to answer our question about the repeatability and reproducibility. In our case, we only have one operator and system, so measurement error due to reproducibility is assumed to be zero. If we had tested multiple operators, the estimate of the variability due to operator would be:",
null,
"Which is pretty similar to that formula you remember from SPC. The range across the operator averages divided by our old friend d2 on the expanded d2 table with g = 1 and m = the number of appraisers. The spreadsheet is set up to handle up to two operators.\n\nAll we are looking at (in this case, to estimate) is the variability due to repeatability, which is:",
null,
"You recognize that from SPC as well I bet. The only difference is that we are taking the average of the standard deviations for each of the j = 8 modules and dividing by c4 for the number of measurements (25 here). The spreadsheet cleverly does all this for you.",
null,
"Again, we are interested in the ability of this device to correctly categorize whether a given module is in or out of spec. Our spec width was 0.1968V, and so we put that into the %R&R formula:",
null,
"We find that the measurement error alone takes up 134.74 percent of the spec.\n\nUh-oh. What is it with these measurement devices?\n\nIf I measure a part that is smack dab in the middle of the spec over time, I would see this distribution:",
null,
"Figure 8: Measurement error\n\nThat means that on any individual measurement (how we have been using this gauge up to this point) of a module that is exactly on target at 1.41, I could reasonably see a reading somewhere between 1.26 and 1.56—in or out of spec at the whim of the gauge variability.\n\nDo you remember that crazy graph that showed the probability of incorrectly classifying a part as conforming or nonconforming on a single measure? Here it is for our voltage measurement system:",
null,
"Figure 9: Probability of incorrectly classifying module conformance on a single measurement\n\nOnce again, we have a measurement system that is probably no good for making conformance decisions on a single measurement. It is stable through time, yes, but so highly variable that we stand a pretty good chance of calling good stuff bad and bad stuff good.\n\nBecause it's stable, we could conceive of taking multiple measurements to use the average of those readings to determine conformance:\n\nSay we are looking for a %R&R of 10 percent:",
null,
"Leaving us measuring the voltage 182 times to get that %R&R.\n\nAhh, that’s not gonna happen.\n\nWe need to figure out what is causing all the variability in this measurement device, or replace it with something that is capable of determining if a module is in conformance with the spec. It is also possible that the modules themselves are contributing noise to the measurements—maybe our assumption that they give the same voltage time after time is wrong. That would be a good thing to find out, too.\n\nWe have some more work to do, it seems.\n\nNote that we did not assess bias—with this gauge, it would be a waste of time since it is unusable for this specification. If we wanted to, all we have to do is get the “true” voltages of those eight modules, get the average of the true values, and see how far off that average is from our measured average. If they are statistically different, we just add or subtract the amount of bias to get an accurate (but not precise with this system) measurement. You would want to track this bias with time on an control chart as well, to make sure it was stable too.\n\nIf you have been reading these mini-dissertations on MSA, you will know that a common assumption of them all is that the measurement is not destructive—that what you are measuring remains constant with time. What do you do if that is not the case?"
] | [
null,
"https://six-sigma-online.com/images/stories/heretic_longterm.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm01.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm02.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm13.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm03.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm04.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm05.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm05.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm07.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm08.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm09.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm10.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm11.jpg",
null,
"https://six-sigma-online.com/images/stories/heretic_longterm12.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9568768,"math_prob":0.9449278,"size":14246,"snap":"2021-43-2021-49","text_gpt3_token_len":2964,"char_repetition_ratio":0.14176379,"word_repetition_ratio":0.002426203,"special_character_ratio":0.2037765,"punctuation_ratio":0.07919708,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97168946,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,8,null,8,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T03:28:21Z\",\"WARC-Record-ID\":\"<urn:uuid:0c93d2de-4932-4fd9-8f9c-bf03be5bed71>\",\"Content-Length\":\"43860\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4bcc18f8-7709-4be1-92b5-26baeb7cc922>\",\"WARC-Concurrent-To\":\"<urn:uuid:44f32347-2400-4a3f-8aa4-dfc1cdf9ea1b>\",\"WARC-IP-Address\":\"35.208.100.176\",\"WARC-Target-URI\":\"https://six-sigma-online.com/index.php/component/content/article/105-six-sigma-heretic/157-performing-a-long-term-msa?Itemid=437\",\"WARC-Payload-Digest\":\"sha1:VPAM2GXW5H4SLGKOMXHM7B2AQ7O4XJMN\",\"WARC-Block-Digest\":\"sha1:LLSIUH5QC6HUGDTAHYH45JZYELYIKCLR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585450.39_warc_CC-MAIN-20211022021705-20211022051705-00214.warc.gz\"}"} |
http://www.percentagecal.com/answer/120-is-what-percent-of-1000 | [
"#### Solution for 120 is what percent of 1000:\n\n120:1000*100 =\n\n(120*100):1000 =\n\n12000:1000 = 12\n\nNow we have: 120 is what percent of 1000 = 12\n\nQuestion: 120 is what percent of 1000?\n\nPercentage solution with steps:\n\nStep 1: We make the assumption that 1000 is 100% since it is our output value.\n\nStep 2: We next represent the value we seek with {x}.\n\nStep 3: From step 1, it follows that {100\\%}={1000}.\n\nStep 4: In the same vein, {x\\%}={120}.\n\nStep 5: This gives us a pair of simple equations:\n\n{100\\%}={1000}(1).\n\n{x\\%}={120}(2).\n\nStep 6: By simply dividing equation 1 by equation 2 and taking note of the fact that both the LHS\n(left hand side) of both equations have the same unit (%); we have\n\n\\frac{100\\%}{x\\%}=\\frac{1000}{120}\n\nStep 7: Taking the inverse (or reciprocal) of both sides yields\n\n\\frac{x\\%}{100\\%}=\\frac{120}{1000}\n\n\\Rightarrow{x} = {12\\%}\n\nTherefore, {120} is {12\\%} of {1000}.\n\n#### Solution for 1000 is what percent of 120:\n\n1000:120*100 =\n\n(1000*100):120 =\n\n100000:120 = 833.33\n\nNow we have: 1000 is what percent of 120 = 833.33\n\nQuestion: 1000 is what percent of 120?\n\nPercentage solution with steps:\n\nStep 1: We make the assumption that 120 is 100% since it is our output value.\n\nStep 2: We next represent the value we seek with {x}.\n\nStep 3: From step 1, it follows that {100\\%}={120}.\n\nStep 4: In the same vein, {x\\%}={1000}.\n\nStep 5: This gives us a pair of simple equations:\n\n{100\\%}={120}(1).\n\n{x\\%}={1000}(2).\n\nStep 6: By simply dividing equation 1 by equation 2 and taking note of the fact that both the LHS\n(left hand side) of both equations have the same unit (%); we have\n\n\\frac{100\\%}{x\\%}=\\frac{120}{1000}\n\nStep 7: Taking the inverse (or reciprocal) of both sides yields\n\n\\frac{x\\%}{100\\%}=\\frac{1000}{120}\n\n\\Rightarrow{x} = {833.33\\%}\n\nTherefore, {1000} is {833.33\\%} of {120}.\n\nCalculation Samples"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8230369,"math_prob":0.9997551,"size":1821,"snap":"2019-35-2019-39","text_gpt3_token_len":618,"char_repetition_ratio":0.13593836,"word_repetition_ratio":0.52684563,"special_character_ratio":0.44920373,"punctuation_ratio":0.14861462,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999949,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-23T11:29:42Z\",\"WARC-Record-ID\":\"<urn:uuid:81057a49-b9fd-4151-a871-c5d117c7e330>\",\"Content-Length\":\"10097\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:126457b3-a374-4d7c-a586-08abdbe700cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c2cea3c-2f49-4455-a37f-4d9744af121a>\",\"WARC-IP-Address\":\"217.23.5.136\",\"WARC-Target-URI\":\"http://www.percentagecal.com/answer/120-is-what-percent-of-1000\",\"WARC-Payload-Digest\":\"sha1:5DGMU5UZ4DJXKA2NS2KKHQ3EQRJ2EYLP\",\"WARC-Block-Digest\":\"sha1:REOFQIHSVWUY4QNY7OVLGBA3KMXWXACM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027318375.80_warc_CC-MAIN-20190823104239-20190823130239-00168.warc.gz\"}"} |
https://my.homecampus.com.sg/Learn/Primary-Grade-4/Decimal/Decimals-Multiplication | [
"## Multiplying Decimals with Whole Number\n\nPractice Unlimited Questions\n\n#### How to Multiply Decimals with Whole Number?\n\nStep 1: Multiply the hundredths.\n\nStep 2: Multiply and add the tenths.\n\nStep 3: Multiply and add the ones.\n\nSee working examples below.\n\n#### 1. Bob had 3 packets of butter each containing 0.4 kg butter. How much butter did Bob have altogether?\n\nMethod 1:\n0.4 + 0.4 + 0.4 = 1.2 kg\nBob had 1.2 kg of butter altogether.\n\nMethod 2:\nMultiply 0.4 kg by 3.\n0.4 × 3 = ?",
null,
"",
null,
"Bob had 1.2 kg of butter altogether.\n\n#### 2. Bob had 3 cartons of milk each containing 0.45 L of milk. How much milk did Bob have altogether?\n\nMultiply 0.45 L by 3.\n0.45 × 3 = ?",
null,
"",
null,
"Bob had 1.35 L of milk altogether.\n\n#### 3. Bob had 7 bags of flour each containing 1.85 kg flour. How much flour did Bob have altogether?\n\nMultiply 1.85 kg by 7.\n1.85 × 7 = ?",
null,
"Bob had 12.95 kg of flour altogether.\n\n#### 4. Serena bought 5 drawing books for \\$4.25 each. How much did she spend on the drawing books altogether?\n\nCost of 1 drawing book = \\$4.25\nCost of 5 drawing books = \\$4.25 × 5\n= \\$21.25\n\nShe spent \\$21.25 on the drawing books altogether.",
null,
""
] | [
null,
"https://my.homecampus.com.sg/images/notes/P4_Decimals_Multiplication_Tenths_1a.png",
null,
"https://my.homecampus.com.sg/images/notes/P4_Decimals_Multiplication_Tenths_1b.png",
null,
"https://my.homecampus.com.sg/images/notes/P4_Decimals_Multiplication_Tenths_2a.png",
null,
"https://my.homecampus.com.sg/images/notes/P4_Decimals_Multiplication_Tenths_2b.png",
null,
"https://my.homecampus.com.sg/images/notes/P4_Decimals_Multiplication_Tenths_3a.png",
null,
"https://my.homecampus.com.sg/images/notes/P4_Decimals_Multiplication_Problem_Sum.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9506132,"math_prob":0.94482887,"size":1445,"snap":"2023-40-2023-50","text_gpt3_token_len":475,"char_repetition_ratio":0.18459404,"word_repetition_ratio":0.46206897,"special_character_ratio":0.36885813,"punctuation_ratio":0.21818182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99590796,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T16:05:41Z\",\"WARC-Record-ID\":\"<urn:uuid:1404d46e-af9c-419e-8f9c-d2933a3f6213>\",\"Content-Length\":\"36907\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d6fbe67-68ef-49e9-b9f2-ed550758df1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:e8e3bd7a-c116-4ff9-adea-3cb361f7f1cc>\",\"WARC-IP-Address\":\"18.213.98.197\",\"WARC-Target-URI\":\"https://my.homecampus.com.sg/Learn/Primary-Grade-4/Decimal/Decimals-Multiplication\",\"WARC-Payload-Digest\":\"sha1:I2GBICCYOWWTPX74KYUK3V2VLHLOZYV3\",\"WARC-Block-Digest\":\"sha1:VMTMLNYBQZ6P6HBW34RER5YTX236I56S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511386.54_warc_CC-MAIN-20231004152134-20231004182134-00132.warc.gz\"}"} |
https://www.luxist.com/content?q=descartes+cube+route+login+kraft+road&nojs=1&ei=UTF-8&nocache=1&s_pt=source7&s_chn=1&s_it=rs-rhr1&s_it=rs-rhr2 | [
"# Luxist Web Search\n\n2. ### Descartes Systems Group - Wikipedia\n\nen.wikipedia.org/wiki/Descartes_Systems_Group\n\nThe Descartes Systems Group Inc. (commonly referred to as Descartes) is a Canadian multinational technology company specializing in logistics software, supply chain management software, and cloud -based services for logistics businesses. Descartes is perhaps best known for its abrupt and unexpected turnaround in the mid-2000s after coming close ...\n\n3. ### René Descartes - Wikipedia\n\nen.wikipedia.org/wiki/René_Descartes\n\nRené Descartes ( / deɪˈkɑːrt / or UK: / ˈdeɪkɑːrt /; French: [ʁəne dekaʁt] ( listen); Latinized: Renatus Cartesius; [note 3] 31 March 1596 – 11 February 1650 : 58 ) was a French philosopher, mathematician, scientist and lay Catholic who invented analytic geometry, linking the previously separate fields of ...\n\n4. ### Dimension - Wikipedia\n\nen.wikipedia.org/wiki/Dimension\n\nv. t. e. In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it – for example, the point at 5 on a number line.\n\n5. ### Think the California DMV’s driver licensing exam is tough to ...\n\nwww.aol.com/news/think-california-dmv-driver...\n\nTry taking mine. Jack Ohman. July 22, 2022, 5:00 AM. The California Department of Motor Vehicles’ written driver licensing exam has come under scrutiny for its chronically high failure rate ...\n\n6. ### Cube (algebra) - Wikipedia\n\nen.wikipedia.org/wiki/Cube_(algebra)\n\nThe cube of a number or any other mathematical expression is denoted by a superscript 3, for example 2 3 = 8 or (x + 1) 3. The cube is also the number multiplied by its square: n 3 = n × n 2 = n × n × n. The cube function is the function x ↦ x 3 (often denoted y = x 3) that maps a number to its cube. It is an odd function, as\n\n7. ### Games on AOL.com: Free online games, chat with others in real ...\n\nwww.aol.com/games/play/masque-publishing/just-words\n\nDiscover the best games on AOL.com - Free online games and chat with others in real-time.\n\n8. ### Euclidean geometry - Wikipedia\n\nen.wikipedia.org/wiki/Euclidean_geometry\n\nEuclidean geometry is a mathematical system attributed to ancient Greek mathematician Euclid, which he described in his textbook on geometry: the Elements. Euclid's approach consists in assuming a small set of intuitively appealing axioms (postulates), and deducing many other propositions ( theorems) from these."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9289494,"math_prob":0.84083575,"size":1993,"snap":"2022-27-2022-33","text_gpt3_token_len":509,"char_repetition_ratio":0.08697838,"word_repetition_ratio":0.0,"special_character_ratio":0.25338686,"punctuation_ratio":0.13527851,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96078575,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T11:31:17Z\",\"WARC-Record-ID\":\"<urn:uuid:4b679beb-479a-4487-9d43-13138e298de4>\",\"Content-Length\":\"54318\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b0821a3-013f-4c6f-8611-bf76275d704a>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b850699-6f5a-4415-956a-f950193d1d4f>\",\"WARC-IP-Address\":\"66.218.84.137\",\"WARC-Target-URI\":\"https://www.luxist.com/content?q=descartes+cube+route+login+kraft+road&nojs=1&ei=UTF-8&nocache=1&s_pt=source7&s_chn=1&s_it=rs-rhr1&s_it=rs-rhr2\",\"WARC-Payload-Digest\":\"sha1:ARIRTZ7XVSW53UG4UZDZXSIQAONHKUCU\",\"WARC-Block-Digest\":\"sha1:YFWE2QFHRCLS5W37ZFVKARQW2NSUMZFB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570921.9_warc_CC-MAIN-20220809094531-20220809124531-00241.warc.gz\"}"} |
https://tools.carboncollective.co/future-value/1014-in-26-years/ | [
"# Future Value of $1,014 in 26 Years Calculating the future value of$1,014 over the next 26 years allows you to see how much your principal will grow based on the compounding interest.\n\nSo if you want to save $1,014 for 26 years, you would want to know approximately how much that investment would be worth at the end of the period. To do this, we can use the future value formula below: $$FV = PV \\times (1 + r)^{n}$$ We already have two of the three required variables to calculate this: • Present Value (FV): This is the original$1,014 to be invested\n• n: This is the number of periods, which is 26 years\n\nThe final variable we need to do this calculation is r, which is the rate of return for the investment. With some investments, the interest rate might be given up front, while others could depend on performance (at which point you might want to look at a range of future values to assess whether the investment is a good option).\n\nIn the table below, we have calculated the future value (FV) of $1,014 over 26 years for expected rates of return from 2% to 30%. The table below shows the present value (PV) of$1,014 in 26 years for interest rates from 2% to 30%.\n\nAs you will see, the future value of $1,014 over 26 years can range from$1,696.85 to $930,175.97. Discount Rate Present Value Future Value 2%$1,014 $1,696.85 3%$1,014 $2,186.78 4%$1,014 $2,811.28 5%$1,014 $3,605.45 6%$1,014 $4,613.07 7%$1,014 $5,888.66 8%$1,014 $7,499.90 9%$1,014 $9,530.75 10%$1,014 $12,085.03 11%$1,014 $15,290.98 12%$1,014 $19,306.63 13%$1,014 $24,326.38 14%$1,014 $30,588.92 15%$1,014 $38,386.79 16%$1,014 $48,077.92 17%$1,014 $60,099.43 18%$1,014 $74,984.27 19%$1,014 $93,381.09 20%$1,014 $116,078.12 21%$1,014 $144,031.53 22%$1,014 $178,399.47 23%$1,014 $220,582.40 24%$1,014 $272,271.24 25%$1,014 $335,504.46 26%$1,014 $412,735.79 27%$1,014 $506,914.48 28%$1,014 $621,580.19 29%$1,014 $760,975.06 30%$1,014 \\$930,175.97\n\nThis is the most commonly used FV formula which calculates the compound interest on the new balance at the end of the period. Some investments will add interest at the beginning of the new period, while some might have continuous compounding, which again would require a slightly different formula.\n\nHopefully this article has helped you to understand how to make future value calculations yourself. You can also use our quick future value calculator for specific numbers."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87645936,"math_prob":0.9991121,"size":2468,"snap":"2023-14-2023-23","text_gpt3_token_len":846,"char_repetition_ratio":0.1911526,"word_repetition_ratio":0.025188917,"special_character_ratio":0.47082657,"punctuation_ratio":0.19773096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992984,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-03T21:10:26Z\",\"WARC-Record-ID\":\"<urn:uuid:b8a9c308-32f0-4812-a028-4bcb0edbf9eb>\",\"Content-Length\":\"21989\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1ee7dba5-f014-44e7-96db-42404193b10e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee9b2e89-01ea-456f-b62a-9bf230a87f51>\",\"WARC-IP-Address\":\"138.197.3.89\",\"WARC-Target-URI\":\"https://tools.carboncollective.co/future-value/1014-in-26-years/\",\"WARC-Payload-Digest\":\"sha1:BS5DNK3OH4RFQVSB3OQAMDRYXMOKOPTO\",\"WARC-Block-Digest\":\"sha1:JXB65QESHOJUAXPQOQ3J5LW365O3QIHZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649343.34_warc_CC-MAIN-20230603201228-20230603231228-00016.warc.gz\"}"} |
https://mollysmtview.com/recipes/how-many-ounces-is-6-cups | [
"# How Many Ounces is 6 Cups? | Get The Helpful Answer Now\n\nDo you often find yourself in the kitchen with recipes that call for measurements such as cups, tablespoons, and ounces but you just don’t know how to convert them? Don’t worry—you’re not alone. Many people ask the same question: How many ounces is 6 cups? It is a simple unit conversion and this blog post will explain exactly how to solve the equation. After reading this article, you’ll never need another conversion calculator; it’s easy to understand all of your measuring needs.\n\n## Understanding Measuring Cups and Ounces\n\nWhen it comes to cooking or baking, measuring cups are an essential kitchen tool. But what exactly do they measure? A cup is a unit of measurement used in the United States and other countries for measuring dry ingredients (such as flour, sugar, rice, etc.) by volume. It is equivalent to 8 fluid ounces (which is approximately 23 cl). One cup is equal to approximately 0.23 liters or 8 Imperial fluid ounces.\n\nOn the other hand, ounces are a unit of measurement that typically measure weight. Ounces can be used to measure both dry and liquid ingredients; however, it’s usually preferred to use grams for dry ingredients and milliliters for liquids. One ounce is equal to 28.35 grams or 1/16th of a pound (0.0625 lbs).\n\n## Calculating from Ounces to Cups\n\nTo convert from ounces to cups, just divide the number of ounces by 8. Here is the formula:\n\nNumber of cup = number of ounces / 8\n\nFor example, if you have 24 ounces and want to know how many cups that is, you would divide 24 by 8 and get 3 cups.\n\n## Converting from Cups to Ounces\n\nConverting from cups to ounces is just as easy. The formula for this conversion is:\n\nNumber of ounces = number of cups x 8\n\nIf you have 10 cups and want to know how many ounces that is, you would multiply 10 by 8 and get 80 ounces.\n\n## How Many Ounces is 6 Cups?\n\nNow that you know how cups and ounces are measured, let’s answer the question: How many ounces is 6 cups? The answer is simple; one cup is equal to 8 fluid ounces, so 6 cups is equal to 48 fluid ounces. To make the equation easier to remember, just remember that one cup is 8 fluid ounces and multiply that number by 6.\n\n## Why Should Know How Many Ounces is 6 Cups?\n\nKnowing how many ounces is 6 cups can be helpful in a number of ways. It helps you make accurate conversions between different measuring systems and save time when cooking. In addition, it ensures that the ingredients in your recipes are measured correctly so you can get consistent results every time. This knowledge will also help ensure that your baked goods turn out exactly as they should, every time.\n\n## How to Convert 6 Cups to Ounces?\n\nNow that we understand why this conversion is important, let’s talk about how to make it happen. The equation for converting 6 cups to ounces is simple: one cup equals 8 ounces. Therefore, 6 cups is equal to 48 ounces.\n\nIf you want to convert a different number of cups into ounces, simply multiply the number of cups by 8. For instance, if a recipe calls for 12 cups of liquid, you would need 96 ounces (12 x 8 = 96).\n\n## Cup to Ounce Conversion Chart\n\nIf you’re looking for a quick reference guide to convert from cups to ounces, then look no further. Here’s a handy chart that will help make your measuring needs much easier.\n\n| Cups | Ounces |\n\n| — | —– |\n\n| 1 Cup | 8 Ounces |\n\n| 2 Cups | 16 Ounces |\n\n| 3 Cups | 24 Ounces |\n\n| 4 Cups | 32 Ounces |\n\n| 5 Cups | 40 Ounces |\n\n| 6 Cups | 48 Ounces |\n\n## Common Conversions of 6 Cups to Ounces for Dry Ingredients\n\nWhen it comes to measuring dry ingredients, the conversion isn’t as straightforward. Dry ingredients are typically measured in grams or milliliters instead of ounces. To convert 6 cups to grams, you would multiply 6 by 16 to get 96 g; and to convert 6 cups to milliliters, you would multiply 6 by 237 (which is the number of milliliters in 1 cup) to get 1422 ml.\n\n## Common Applications of 6 cups to ounces Conversion\n\nKnowing how to convert 6 cups to ounces is essential for many kitchen tasks. Here are a few examples of when this conversion is useful:\n\n• Baking: If you’re baking a cake or other recipe, it’s important to get the measurements right. Many popular recipes call for measuring ingredients in cups rather than ounces, so understanding this conversion can help you make sure your baking comes out perfectly.\n• Cooking: If a recipe calls for 6 cups of liquid, knowing how many ounces are in 6 cups can help you calculate the amount of liquid needed to make the dish. This is especially important if you’re making large, multi-ingredient dishes like soup or stew.\n• Measuring drinks: If you’re making a pitcher of iced tea or other beverage, understanding the 6 cups to ounces conversion can help you make sure that you have enough to serve your guests.\n\n## Conclusion: How Many Ounces is 6 Cups?\n\nSo if you’re ever wondering “how many ounces is 6 cups?” the answer is 48. Now that you understand the basics of measuring cups and ounces, converting between them will be a breeze. Whether you are baking a cake or making soup, you can now easily convert between different measurement units with expert accuracy.\n\n## FAQ: ounces is 6 cups\n\n### Is 32 Oz 6 cups?\n\nRest assured, 32 ounces equates to 4 cups. No need for further concern.\n\n### Is 12 oz 6 cups?\n\nLooking to convert 12 oz to cups? Simplify the process with the easiest conversion guide – 12 US fluid ounces equals 1.5 cups. Make accurate measurements a breeze with this straightforward conversion.\n\n### What is 6 cups in ounces on a digital scale?\n\nIf you’re looking for an even more precise measurement, you may be wondering what 6 cups in ounces on a digital scale would be. To get the most accurate reading, make sure to tare your scale—this will ensure that any container or plate you place on the scale is taken into account when weighing your ingredients. Once you’ve done this, you can simply put 6 cups of the ingredient onto the scale and read the measurement in ounces.\n\n### Does 6 cups in ounces of liquid equal 6 cups in ounces of dry ingredients?\n\nNo, 6 cups in ounces of liquid does not equal 6 cups in ounces of dry ingredients. This is because different ingredients have different densities—liquid ingredients are generally much more dense than dry ones. For example, 1 cup of water is equal to 8 ounces while 1 cup of flour is only 4.5 ounces. Therefore, if a recipe calls for 6 cups of liquid, you would need 48 ounces while 6 cups of flour would only require 27 ounces.\n\n### How many ounces is 6 cups of sugar?\n\nThe same equation applies for other ingredients as well. For example, how many ounces is 6 cups of sugar? The answer is the same: 6 x 8 = 48. So, 6 cups of sugar is equal to 48 ounces.\n\n### What is 6 cups in ounces of liquid?\n\nWhen measuring liquids, you may find yourself wondering what 6 cups in ounces of liquid would be. As we’ve seen in the examples above, 6 cups is equal to 48 ounces.\n\n### Is 16 Oz 6 cups?\n\nTo find out the number of cups in 16 fluid ounces, simply divide 16 by 8. Therefore, 16 ounces equals 2 cups.\n\n### How many ounces is 6 cups of water?\n\nNow that we’ve discussed the general conversion equation, let’s look at a specific example. How many ounces is 6 cups of water? To calculate this, simply multiply 6 by 8: 6 x 8 = 48. So, 6 cups of water is equal to 48 ounces.\n\n### Does 6 cups in ounces of liquid equal 6 cups in grams?\n\nNo, 6 cups in ounces of liquid does not equal 6 cups in grams. This is because the conversion between ounces and grams varies depending on the ingredient being measured. For instance, 1 cup of water is equal to 8 ounces and 236 grams while 1 cup of flour is 4.5 ounces and 120 grams. So, 6 cups of water would be equal to 48 ounces and 1392 grams while 6 cups of flour would be 27 ounces and 720 grams.\n\n### Is 6 cups in ounces more or less than 6 cups in milliliters?\n\n6 cups in ounces is typically much more than 6 cups in milliliters—this is because one cup of liquid is equal to 8 ounces and 236 milliliters. Therefore, 6 cups of liquid would be equal to 48 ounces and 1416 milliliters."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92404455,"math_prob":0.9840871,"size":9232,"snap":"2023-40-2023-50","text_gpt3_token_len":2316,"char_repetition_ratio":0.19375813,"word_repetition_ratio":0.1632537,"special_character_ratio":0.25433275,"punctuation_ratio":0.102564104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98951316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T21:25:01Z\",\"WARC-Record-ID\":\"<urn:uuid:11b0f9af-1581-4b71-a8d6-936f6146b0d9>\",\"Content-Length\":\"212258\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:93e7e8f9-cc2f-496b-a292-a29a099c21fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:77b505f1-d638-49a5-b026-c1b7602eee6e>\",\"WARC-IP-Address\":\"67.202.92.13\",\"WARC-Target-URI\":\"https://mollysmtview.com/recipes/how-many-ounces-is-6-cups\",\"WARC-Payload-Digest\":\"sha1:UZDTXF5BAEPHD4JOJPDOKF4L3QZ6M5HB\",\"WARC-Block-Digest\":\"sha1:7IGVHOGOTU7WU537F5JB7ACJNYRLT4YT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510326.82_warc_CC-MAIN-20230927203115-20230927233115-00078.warc.gz\"}"} |
http://codeforces.com/problemset/problem/545/E | [
"E. Paths and Trees\ntime limit per test\n3 seconds\nmemory limit per test\n256 megabytes\ninput\nstandard input\noutput\nstandard output\n\nLittle girl Susie accidentally found her elder brother's notebook. She has many things to do, more important than solving problems, but she found this problem too interesting, so she wanted to know its solution and decided to ask you about it. So, the problem statement is as follows.\n\nLet's assume that we are given a connected weighted undirected graph G = (V, E) (here V is the set of vertices, E is the set of edges). The shortest-path tree from vertex u is such graph G1 = (V, E1) that is a tree with the set of edges E1 that is the subset of the set of edges of the initial graph E, and the lengths of the shortest paths from u to any vertex to G and to G1 are the same.\n\nYou are given a connected weighted undirected graph G and vertex u. Your task is to find the shortest-path tree of the given graph from vertex u, the total weight of whose edges is minimum possible.\n\nInput\n\nThe first line contains two numbers, n and m (1 ≤ n ≤ 3·105, 0 ≤ m ≤ 3·105) — the number of vertices and edges of the graph, respectively.\n\nNext m lines contain three integers each, representing an edge — ui, vi, wi — the numbers of vertices connected by an edge and the weight of the edge (ui ≠ vi, 1 ≤ wi ≤ 109). It is guaranteed that graph is connected and that there is no more than one edge between any pair of vertices.\n\nThe last line of the input contains integer u (1 ≤ u ≤ n) — the number of the start vertex.\n\nOutput\n\nIn the first line print the minimum total weight of the edges of the tree.\n\nIn the next line print the indices of the edges that are included in the tree, separated by spaces. The edges are numbered starting from 1 in the order they follow in the input. You may print the numbers of the edges in any order.\n\nIf there are multiple answers, print any of them.\n\nExamples\nInput\n3 31 2 12 3 11 3 23\nOutput\n21 2\nInput\n4 41 2 12 3 13 4 14 1 24\nOutput\n42 3 4\nNote\n\nIn the first sample there are two possible shortest path trees:\n\n• with edges 1 – 3 and 2 – 3 (the total weight is 3);\n• with edges 1 – 2 and 2 – 3 (the total weight is 2);\n\nAnd, for example, a tree with edges 1 – 2 and 1 – 3 won't be a shortest path tree for vertex 3, because the distance from vertex 3 to vertex 2 in this tree equals 3, and in the original graph it is 1."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9365114,"math_prob":0.9811386,"size":2128,"snap":"2021-31-2021-39","text_gpt3_token_len":568,"char_repetition_ratio":0.14830509,"word_repetition_ratio":0.027272727,"special_character_ratio":0.2781955,"punctuation_ratio":0.086864404,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99733746,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T20:08:01Z\",\"WARC-Record-ID\":\"<urn:uuid:15941efe-1d86-4d84-ae3b-9c79a9b59bfa>\",\"Content-Length\":\"58255\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a3f1e15-9a74-40a9-8099-ec1a1bb8a1e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:935896f6-2605-4b0a-8f81-dd9f30e6455c>\",\"WARC-IP-Address\":\"213.248.110.126\",\"WARC-Target-URI\":\"http://codeforces.com/problemset/problem/545/E\",\"WARC-Payload-Digest\":\"sha1:IKDDINOVYNXPF34QPNO25MTZKYNDCQUR\",\"WARC-Block-Digest\":\"sha1:T7VEIAGIYTAUJGT4DJ657IVFANOK2OUI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058467.95_warc_CC-MAIN-20210927181724-20210927211724-00694.warc.gz\"}"} |
https://www.r-bloggers.com/ggplot2-legend-part-6/ | [
"# ggplot2: Legend – Part 6\n\nApril 12, 2018\nBy\n\n### Introduction\n\nThis is the 18th post in the series Elegant Data Visualization with ggplot2.\nIn the previous post, we learnt how to modify the legend of plot when `alpha`\nis mapped to a categorical variable. In this post, we will learn to modify legend\n\n• title\n• label\n• and bar\n\nSo far, we have learnt to modify the components of a legend using `scale_*`\nfamily of functions. Now, we will use the `guide` argument and supply it\nvalues using the `guide_legend()` function.\n\n### Libraries, Code & Data\n\nWe will use the following libraries in this post:\n\nAll the data sets used in this post can be found here and code can be downloaded from here.\n\n### Title\n\n#### Title Alignment\n\nThe horizontal alignment of the title can be managed using the `title.hjust`\nargument. It can take any value between `0` and `1`.\n\n• 0 (left)\n• 1 (right)\n\nIn the below example, we align the title to the center by assigning the value\n`0.5`.\n\n``````ggplot(mtcars) + geom_point(aes(disp, mpg, color = factor(cyl))) +\nscale_color_manual(values = c(\"red\", \"blue\", \"green\"),\nguide = guide_legend(title = \"Cylinders\", title.hjust = 0.5))``````",
null,
"#### Title Alignment (Vertical)\n\nTo manage the vertical alignment of the title, use `title.vjust`.\n\n``````ggplot(mtcars) + geom_point(aes(disp, mpg, color = hp)) +\nscale_color_continuous(guide = guide_colorbar(\ntitle = \"Horsepower\", title.position = \"top\", title.vjust = 1))``````",
null,
"#### Title Position\n\nThe position of the title can be managed using `title.posiiton` argument. It\ncan be positioned at:\n\n• top\n• bottom\n• left\n• right\n``````ggplot(mtcars) + geom_point(aes(disp, mpg, color = factor(cyl))) +\nscale_color_manual(values = c(\"red\", \"blue\", \"green\"),\nguide = guide_legend(title = \"Cylinders\", title.hjust = 0.5,\ntitle.position = \"top\"))``````",
null,
"## Label\n\n### Label Position\n\nThe position of the label can be managed using the `label.position` argument.\nIt can be positioned at:\n\n• top\n• bottom\n• left\n• right\n\nIn the below example, we position the label at right.\n\n``````ggplot(mtcars) + geom_point(aes(disp, mpg, color = factor(cyl))) +\nscale_color_manual(values = c(\"red\", \"blue\", \"green\"),\nguide = guide_legend(label.position = \"right\"))``````",
null,
"### Label Alignment\n\nThe horizontal alignment of the label can be managed using the `label.hjust`\nargument. It can take any value between `0` and `1`.\n\n• 0 (left)\n• 1 (right)\n\nIn the below example, we align the label to the center by assigning the value\n`0.5`.\n\n• alignment\n• 0 (left)\n• 1 (right)\n``````ggplot(mtcars) + geom_point(aes(disp, mpg, color = factor(cyl))) +\nscale_color_manual(values = c(\"red\", \"blue\", \"green\"),\nguide = guide_legend(label.hjust = 0.5))``````",
null,
"#### Labels Alignment (Vertical)\n\nThe vertical alignment of the label can be managed using the `label.vjust`\nargument.\n\n``````ggplot(mtcars) +\ngeom_point(aes(disp, mpg, color = hp)) +\nscale_color_continuous(guide = guide_colorbar(\nlabel.vjust = 0.8))``````",
null,
"### Direction\n\nThe direction of the label can be either horizontal or veritcal and it can be\nset using the `direction` argument.\n\n``````ggplot(mtcars) + geom_point(aes(disp, mpg, color = factor(cyl))) +\nscale_color_manual(values = c(\"red\", \"blue\", \"green\"),\nguide = guide_legend(direction = \"horizontal\"))``````",
null,
"### Rows\n\nThe label can be spread across multiple rows using the `nrow` argument. In the\nbelow example, the label is spread across 2 rows.\n\n``````ggplot(mtcars) + geom_point(aes(disp, mpg, color = factor(cyl))) +\nscale_color_manual(values = c(\"red\", \"blue\", \"green\"),\nguide = guide_legend(nrow = 2))``````",
null,
"### Reverse\n\nThe order of the labels can be reversed using the `reverse` argument. We need\nto supply logical values i.e. either `TRUE` or `FALSE`. If `TRUE`, the order\nwill be reversed.\n\n``````ggplot(mtcars) + geom_point(aes(disp, mpg, color = factor(cyl))) +\nscale_color_manual(values = c(\"red\", \"blue\", \"green\"),\nguide = guide_legend(reverse = TRUE))``````",
null,
"### Putting it all together…\n\n``````ggplot(mtcars) + geom_point(aes(disp, mpg, color = factor(cyl))) +\nscale_color_manual(values = c(\"red\", \"blue\", \"green\"),\nguide = guide_legend(title = \"Cylinders\", title.hjust = 0.5,\ntitle.position = \"top\", label.position = \"right\",\ndirection = \"horizontal\", label.hjust = 0.5, nrow = 2, reverse = TRUE)\n)``````",
null,
"### Legend Bar\n\nSo far we have looked at modifying components of the legend when it acts as a\nguide for `color`, `fill` or `shape` i.e. when the aesthetics have been mapped\nto a categorical variable. In this section, you will learn about\n`guide_colorbar()` which will allow us to modify the legend when the aesthetics\nare mapped to a continuous variable.\n\n### Plot\n\nLet us start with a scatter plot examining the relationship between displacement\nand miles per gallon from the mtcars data set. We will map the color of the points\nto the `hp` variable.\n\n``````ggplot(mtcars) +\ngeom_point(aes(disp, mpg, color = hp))``````",
null,
"### Width\n\nThe width of the bar can be modified using the `barwidth` argument. It is used\ninside the `guide_colorbar()` function which itself is supplied to the `guide`\nargument of `scale_color_continuous()`.\n\n``````ggplot(mtcars) +\ngeom_point(aes(disp, mpg, color = hp)) +\nscale_color_continuous(guide = guide_colorbar(\nbarwidth = 10))``````",
null,
"### Height\n\nSimilarly, the height of the bar can be modified using the `barheight` argument.\n\n``````ggplot(mtcars) +\ngeom_point(aes(disp, mpg, color = hp)) +\nscale_color_continuous(guide = guide_colorbar(\nbarheight = 3))``````",
null,
"### Bins\n\nThe `nbin` argument allows us to specify the number of bins in the bar.\n\n``````ggplot(mtcars) +\ngeom_point(aes(disp, mpg, color = hp)) +\nscale_color_continuous(guide = guide_colorbar(\nnbin = 4))``````",
null,
"### Ticks\n\nThe ticks of the bar can be removed using the `ticks` argument and setting it\nto `FALSE`.\n\n``````ggplot(mtcars) +\ngeom_point(aes(disp, mpg, color = hp)) +\nscale_color_continuous(guide = guide_colorbar(\nticks = FALSE))``````",
null,
"### Upper/Lower Limits\n\nThe upper and lower limits of the bars can be drawn or undrawn using the\n`draw.ulim` and `draw.llim` arguments. They both accept logical values.\n\n``````ggplot(mtcars) +\ngeom_point(aes(disp, mpg, color = hp)) +\nscale_color_continuous(guide = guide_colorbar(\ndraw.ulim = TRUE, draw.llim = FALSE))``````",
null,
"### Guides: Color, Shape & Size\n\nThe `guides()` function can be used to create multiple legends to act as a\nguide for `color`, `shape`, `size` etc. as shown below. First, we map color,\nshape and size to different variables. Next, in the `guides()` function, we\nsupply values to each of the above aesthetics to indicate the type of legend.\n\n``````ggplot(mtcars) +\ngeom_point(aes(disp, mpg, color = hp,\nsize = qsec, shape = factor(gear))) +\nguides(color = \"colorbar\", shape = \"legend\", size = \"legend\")``````",
null,
"### Guides: Title\n\nTo modify the components of the different legends, we must use the\n`guide_*` family of functions. In the below example, we use `guide_colorbar()`\nfor the legend acting as guide for color mapped to a continuous variable and\n`guide_legend()` for the legends acting as guide for shape/size mapped to\ncategorical variables.\n\n``````ggplot(mtcars) +\ngeom_point(aes(disp, mpg, color = hp, size = wt, shape = factor(gear))) +\nguides(color = guide_colorbar(title = \"Horsepower\"),\nshape = guide_legend(title = \"Weight\"), size = guide_legend(title = \"Gear\")\n)``````",
null,
"### Summary\n\nIn this post, we will learn to modify legend\n\n• title\n• label\n• and bar\n\n### Up Next..\n\nIn the next post, we will learn faceting.\n\nR-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more..."
] | [
null,
"https://i2.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg2-1.png",
null,
"https://i0.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg22-1.png",
null,
"https://i0.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg3-1.png",
null,
"https://i2.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg4-1.png",
null,
"https://i1.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg5-1.png",
null,
"https://i0.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg23-1.png",
null,
"https://i2.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg6-1.png",
null,
"https://i0.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg7-1.png",
null,
"https://i2.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg8-1.png",
null,
"https://i0.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg20-1.png",
null,
"https://i1.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg36-1.png",
null,
"https://i1.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg24-1.png",
null,
"https://i2.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg25-1.png",
null,
"https://i2.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg26-1.png",
null,
"https://i1.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg28-1.png",
null,
"https://i1.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg29-1.png",
null,
"https://i0.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg34-1.png",
null,
"https://i1.wp.com/blog.rsquaredacademy.com/post/2018-04-13-legend-part-6_files/figure-html/leg35-1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5878022,"math_prob":0.985062,"size":7273,"snap":"2019-51-2020-05","text_gpt3_token_len":1929,"char_repetition_ratio":0.15504196,"word_repetition_ratio":0.2956989,"special_character_ratio":0.2642651,"punctuation_ratio":0.15929878,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9928822,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,6,null,3,null,6,null,6,null,6,null,6,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T03:22:00Z\",\"WARC-Record-ID\":\"<urn:uuid:e2e06363-a08a-40b1-9a22-365f9d3f31b5>\",\"Content-Length\":\"151730\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4df97b38-e9e1-46f6-9732-46ddd510b61d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5527eaf7-cd31-4dd6-839b-462097700434>\",\"WARC-IP-Address\":\"104.28.9.205\",\"WARC-Target-URI\":\"https://www.r-bloggers.com/ggplot2-legend-part-6/\",\"WARC-Payload-Digest\":\"sha1:PQSLOFJ35MGSGLZX7I7HK4VKZQOT6SPS\",\"WARC-Block-Digest\":\"sha1:STVXXAKEUYRG25HADB6RJBO3WHVYMVFN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540504338.31_warc_CC-MAIN-20191208021121-20191208045121-00136.warc.gz\"}"} |
http://www.cs.utexas.edu/users/flame/laff/alaff-beta/chapter06-stability-an-algorithm-for-computing-gemv.html | [
"## Subsection6.3.4Matrix-vector multiplication\n\nAssume $A \\in \\Rmxn$ and partition\n\n\\begin{equation*} A = \\left( \\begin{array}{c} a_0^T \\\\ a_1^T \\\\ \\vdots \\\\ a_{m-1}^T \\end{array} \\right) \\quad \\mbox{and} \\quad \\left( \\begin{array}{c} \\psi_0 \\\\ \\psi_1 \\\\ \\vdots \\\\ \\psi_{m-1} \\end{array} \\right). \\end{equation*}\n\nThen\n\n\\begin{equation*} \\left( \\begin{array}{c} \\psi_0 \\\\ \\psi_1 \\\\ \\vdots \\\\ \\psi_{m-1} \\end{array} \\right) := \\left( \\begin{array}{c} a_0^T x \\\\ a_1^T x\\\\ \\vdots \\\\ a_{m-1}^T x\\\\ \\end{array} \\right). \\end{equation*}\n\nFrom R-1B 6.3.3.2 regarding the dot product we know that\n\n\\begin{equation*} \\begin{array}{l} \\check y = \\left( \\begin{array}{c} \\check \\psi_0 \\\\ \\check \\psi_1 \\\\ \\vdots \\\\ \\check \\psi_{m-1} \\end{array} \\right) = \\left( \\begin{array}{c} ( a_0 + \\delta\\!a_0)^T x \\\\ ( a_1 + \\delta\\!a_1)^T x\\\\ \\vdots \\\\ ( a_{m-1} + \\delta\\!a_{m-1})^T x\\\\ \\end{array} \\right) \\\\ ~~~= \\left( \\left( \\begin{array}{c} a_0^T \\\\ a_1^T\\\\ \\vdots \\\\ a_{m-1}^T \\end{array} \\right) + \\left( \\begin{array}{c} \\delta\\!a_0^T \\\\ \\delta\\!a_1^T \\\\ \\vdots \\\\ \\delta\\!a_{m-1}^T \\end{array} \\right) \\right) x = ( A + \\Delta\\!\\!A ) x , \\end{array} \\end{equation*}\n\nwhere $\\vert \\delta\\!a_i \\vert \\leq \\gamma_n \\vert a_i \\vert\\text{,}$ $i = 0, \\ldots , m-1 \\text{,}$ and hence $\\vert \\Delta\\!\\!A \\vert \\leq \\gamma_n \\vert A \\vert \\text{.}$ .\n\nAlso, from R-1B 6.3.3.2 regarding the dot product we know that\n\n\\begin{equation*} \\check y = \\left( \\begin{array}{c} \\check \\psi_0 \\\\ \\check \\psi_1 \\\\ \\vdots \\\\ \\check \\psi_{m-1} \\end{array} \\right) = \\left( \\begin{array}{c} a_0^T x + \\delta\\!\\psi_0\\\\ a_1^T x + \\delta\\!\\psi_1\\\\ \\vdots \\\\ a_{m-1}^T x + \\delta\\!\\psi_{m-1} \\end{array} \\right) = \\left( \\begin{array}{c} a_0^T \\\\ a_1^T\\\\ \\vdots \\\\ a_{m-1}^T \\end{array} \\right) x + \\left( \\begin{array}{c} \\delta\\!\\psi_0 \\\\ \\delta\\!\\psi_1 \\\\ \\vdots \\\\ \\delta\\!\\psi_{m-1} \\end{array} \\right) = A x + \\deltay . \\end{equation*}\n\nwhere $\\vert \\delta\\!\\psi_i \\vert \\leq \\gamma_n \\vert a_i \\vert^T \\vert x \\vert$ and hence $\\vert \\deltay \\vert \\leq \\gamma_n \\vert A \\vert \\vert x \\vert \\text{.}$\n\nThe above observations can be summarized in the following theorem:\n\n###### Ponder This6.3.4.1.\n\nIn the above theorem, could one instead prove the result\n\n\\begin{equation*} \\check y = A ( x + \\deltax ), \\end{equation*}\n\nwhere $\\deltax$ is \"small\"?\n\nSolution\n\nThe answer is \"sort of\". The reason is that for each individual element of $y$\n\n\\begin{equation*} \\check \\psi_i = a_i^T ( x + \\deltax ) \\end{equation*}\n\nwhich would appear to support that\n\n\\begin{equation*} \\left( \\begin{array}{c} \\check \\psi_0 \\\\ \\check \\psi_1 \\\\ \\vdots \\\\ \\check \\psi_{m-1} \\end{array} \\right) = \\left( \\begin{array}{c} a_0^T ( x + \\deltax ) \\\\ a_1^T ( x + \\deltax ) \\\\ \\vdots \\\\ a_{m-1}^T ( x + \\deltax ) \\end{array} \\right). \\end{equation*}\n\nHowever, the $\\deltax$ for each entry $\\check \\psi_{i}$ is different, meaning that we cannot factor out $x + \\deltax$ to find that $\\check y = A ( x + \\deltax ) \\text{.}$\n\nHowever, one could argue that we know that $\\check y = A x + \\delta\\!y$ where $\\vert \\deltay \\vert \\leq \\gamma_{n} \\vert A \\vert \\vert x \\vert \\text{.}$ Hence if $A \\delta\\!x = \\delta\\!y$ then $A ( x + \\deltax ) = \\check y \\text{.}$ This would mean that $\\delta\\!y$ is in the column space of $A \\text{.}$ (For example, if $A$ is nonsingular). However, that is not quite what we are going for here."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7843859,"math_prob":1.0000074,"size":1967,"snap":"2020-10-2020-16","text_gpt3_token_len":711,"char_repetition_ratio":0.19052471,"word_repetition_ratio":0.15454546,"special_character_ratio":0.36756483,"punctuation_ratio":0.1482412,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000095,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T01:01:38Z\",\"WARC-Record-ID\":\"<urn:uuid:47050aa0-5482-4275-ad49-b310acec840a>\",\"Content-Length\":\"29416\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8869c657-602f-4f8c-ab4b-78f845b2d68b>\",\"WARC-Concurrent-To\":\"<urn:uuid:86e6a3a0-8c1b-4881-95b0-7534c2b35c50>\",\"WARC-IP-Address\":\"128.83.120.48\",\"WARC-Target-URI\":\"http://www.cs.utexas.edu/users/flame/laff/alaff-beta/chapter06-stability-an-algorithm-for-computing-gemv.html\",\"WARC-Payload-Digest\":\"sha1:KSJUK2YRYNLSGPVN5WRTUUQM4OZKHTMA\",\"WARC-Block-Digest\":\"sha1:X25RQOQKUQB547ZNV6BW6LHEJTGKUNR6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145989.45_warc_CC-MAIN-20200224224431-20200225014431-00531.warc.gz\"}"} |
https://worksheetideasbygregory.netlify.app/equivalent-ratios-worksheets-grade-6.html | [
"# Equivalent Ratios Worksheets Grade 6",
null,
"### Equivalent Ratios With Blanks C Fractions Worksheet Razao E Proporcao Matematica",
null,
"### Equivalent Ratios Worksheet 6th Grade Sixth Grade Math Worksheets Line In 2020 Free Printable Math Worksheets Math Fact Worksheets 7th Grade Math Worksheets",
null,
"### Equivalent Ratios With Blanks A Fractions Worksheet Equivalent Ratios Fractions Worksheets Fractions",
null,
"### Equivalent Ratios With Variables B Fractions Worksheet Equivalent Ratios Fractions Worksheets Math Worksheet",
null,
"### 6th Grade Ratio Tables Worksheets Math Fractions Worksheets Algebra Worksheets Ks3 Maths Worksheets",
null,
"### Equivalent Ratios Part 2 In 2020 Play To Learn Singapore Math Printable Worksheets",
null,
"",
null,
"### Equivalent Fractions Worksheet Math Fractions Math Fractions Worksheets Fractions Worksheets",
null,
"### The Equivalent Fractions Models A Math Worksheet From The Fractions Worksheet Page At Mat Fractions Worksheets Equivalent Fractions Math Fractions Worksheets",
null,
"### Expressing Ratio As Fractions In 2020 Play To Learn Singapore Math Printable Worksheets",
null,
"### Equivalent Ratios Buku Catatan Matematika Matematika Buku",
null,
"### Fractions 36 Worksheets Equivalent Fractions Mixed Etsy Math Worksheets Fractions Worksheets Multiplying Fractions Worksheets",
null,
"### 4 Worksheet Free Math Worksheets Fourth Grade 4 Fractions Adding Mixed Numbers Like Denominat In 2020 Fractions Mixed Fractions Worksheets Improper Fractions",
null,
"### The Simplify Proper Fractions To Lowest Terms Easier Version A Math Worksheet From The Fraction Simplifying Fractions Fractions Worksheets Proper Fractions",
null,
"### Adding Fractions Worksheets What Is As A Fraction Math Fraction Kindergarten Adding Mixe Fractions Worksheets Fractions Worksheets Grade 4 Equivalent Fractions",
null,
"### Expressing Decimals As Percentage In 2020 Play To Learn Printable Worksheets Singapore Math\n\nSource : pinterest.com"
] | [
null,
"https://i.pinimg.com/originals/b3/26/f3/b326f3ee2d29d261cf47f4c79c1f4f33.jpg",
null,
"https://i.pinimg.com/originals/fa/69/ef/fa69efc40b4171b197e4414660988b6d.jpg",
null,
"https://i.pinimg.com/originals/64/b0/fb/64b0fb0f7264dfc84791c484efb98390.jpg",
null,
"https://i.pinimg.com/originals/1a/00/7c/1a007c036231147185664de942bf81c9.jpg",
null,
"https://i.pinimg.com/originals/55/24/1b/55241b4459d3102e194acde9be875bb2.jpg",
null,
"https://i.pinimg.com/originals/11/a7/84/11a78494da7f65f234717a8c4d3870b2.jpg",
null,
"https://i.pinimg.com/originals/b3/26/f3/b326f3ee2d29d261cf47f4c79c1f4f33.jpg",
null,
"https://i.pinimg.com/originals/44/f5/ad/44f5ad62c7b567fcd5cb830130345886.jpg",
null,
"https://i.pinimg.com/originals/05/55/be/0555bece57e4b56bef2323fbd022f16e.jpg",
null,
"https://i.pinimg.com/originals/94/a0/ec/94a0ec1bc9f8b5195097a8f5dd202efb.jpg",
null,
"https://i.pinimg.com/originals/d8/93/85/d89385ddf9b07b263047d6786620aa69.jpg",
null,
"https://i.pinimg.com/originals/f5/f3/27/f5f32781abce00ee5cd935d307a4f41e.jpg",
null,
"https://i.pinimg.com/originals/3c/4a/56/3c4a56efed25f2aed8737d7c27ec96d2.jpg",
null,
"https://i.pinimg.com/originals/69/ce/b5/69ceb5c325ab98047ee2b79910dff0f3.jpg",
null,
"https://i.pinimg.com/originals/8c/3c/05/8c3c05eb9ed9366f1ed397a96ee0f268.jpg",
null,
"https://i.pinimg.com/originals/fa/25/ee/fa25eea1f39c5303652d4549fa7b6067.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81753075,"math_prob":0.75289226,"size":758,"snap":"2022-05-2022-21","text_gpt3_token_len":189,"char_repetition_ratio":0.3448276,"word_repetition_ratio":0.0,"special_character_ratio":0.21108179,"punctuation_ratio":0.019417476,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973108,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,null,null,6,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,3,null,8,null,null,null,4,null,null,null,10,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T02:14:40Z\",\"WARC-Record-ID\":\"<urn:uuid:3e46bd5a-b448-4fb2-ace5-d15c10578848>\",\"Content-Length\":\"32394\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:49b8b42f-6719-4bae-a630-5bb0cb1e3764>\",\"WARC-Concurrent-To\":\"<urn:uuid:29963676-0cd6-40a7-8547-3432bd5054ec>\",\"WARC-IP-Address\":\"68.183.23.220\",\"WARC-Target-URI\":\"https://worksheetideasbygregory.netlify.app/equivalent-ratios-worksheets-grade-6.html\",\"WARC-Payload-Digest\":\"sha1:EHSRJBI2DLPMT2ZHIQ6EPQEVBS7NYHTP\",\"WARC-Block-Digest\":\"sha1:H63PBOND7SMK7QGM5MOVEGH2KYACNKD6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662515466.5_warc_CC-MAIN-20220516235937-20220517025937-00726.warc.gz\"}"} |
https://codereview.stackexchange.com/questions/132582/unicode-chess-pvp-with-move-validation | [
"# Unicode Chess PvP with Move Validation\n\n### Main Purpose\n\nThis script allows two players to play chess on a virtual chessboard printed on the screen by making use of the Unicode chess characters.\n\n### Visual appearence\n\nThe chessboard looks like this:\n\n8 ♜ ♞ ♝ ♛ ♚ ♝ ♞ ♜\n7 ♟ ♟ ♟ ♟ ♟ ♟ ♟ ♟\n6\n5\n4\n3\n2 ♙ ♙ ♙ ♙ ♙ ♙ ♙ ♙\n1 ♖ ♘ ♗ ♕ ♔ ♗ ♘ ♖\na b c d e f g h\n\n\n### User interaction\n\nMoves are performed by typing in the start position of the piece in chess notation, [ENTER], and the end position.\n\nFor example (starting from the start position):\n\nStart? e2\nEnd? e4\n\n\nResults in:\n\n8 ♜ ♞ ♝ ♛ ♚ ♝ ♞ ♜\n7 ♟ ♟ ♟ ♟ ♟ ♟ ♟ ♟\n6\n5\n4 ♙\n3\n2 ♙ ♙ ♙ ♙ ♙ ♙ ♙\n1 ♖ ♘ ♗ ♕ ♔ ♗ ♘ ♖\na b c d e f g h\n\n\n### Legality checks\n\nThis programme also performs many checks to ensure that moves are legal.\n\nIt checks:\n\n• If start and end position are both inside the board.\n• If a player tries to move an opponents piece.\n• If at the given start there is no piece.\n• If a piece is being moved unlawfully. # TO DO support castling and en-passant\n• If the end location is already occupied by a same colour piece.\n• #TO DO: Limit options if the king is in check\n\n### AI extension possibility\n\nThe main loop loops between function objects to allow a possibility of extension, introducing AI should be as easy as replacing a player_turn with an ai_turn\n\n### Board data storage\n\nThe board is represented as a dict {Point : piece}, the code board[Point(x, y)] returns the piece at position (x, y).\n\nEmpty squares are not even present in the dictionary as keys.\n\n\"\"\"\n\n-- Main Purpose\n\nThis script allows two players to play chess on a virtual chessboard\nprinted on the screen by making use of the Unicode chess characters.\n\n-- Visual appearence\n\nThe chessboard looks like this:\n\n8 ♜ ♞ ♝ ♛ ♚ ♝ ♞ ♜\n7 ♟ ♟ ♟ ♟ ♟ ♟ ♟ ♟\n6\n5\n4\n3\n2 ♙ ♙ ♙ ♙ ♙ ♙ ♙ ♙\n1 ♖ ♘ ♗ ♕ ♔ ♗ ♘ ♖\na b c d e f g h\n\n-- User interaction\n\nMoves are performed by typing in the start position of\nthe piece in chess notation, [ENTER], and the end position.\n\nFor example (starting from the start position):\n\nStart? e2\nEnd? e4\n\nResults in:\n\n8 ♜ ♞ ♝ ♛ ♚ ♝ ♞ ♜\n7 ♟ ♟ ♟ ♟ ♟ ♟ ♟ ♟\n6\n5\n4 ♙\n3\n2 ♙ ♙ ♙ ♙ ♙ ♙ ♙\n1 ♖ ♘ ♗ ♕ ♔ ♗ ♘ ♖\na b c d e f g h\n\n-- Legality checks\n\nThis programme also performs many checks to ensure that moves\nare legal.\n\nIt checks:\n\n- If start and end position are both inside the board.\n- If a player tries to move an opponents piece.\n- If at the given start there is no piece.\n- If a piece is being moved unlawfully. # TO DO support castling and en-passant\n- If the end location is already occupied by a same colour piece.\n- #TO DO: Limit options if the king is in check\n\n-- AI extension possibility\n\nThe main loop loops between function objects to allow a possibility\nof extension, introducing AI should be as easy as\nreplacing a player_turn with an ai_turn\n\n-- Board data storage\n\nThe board is represented as a dict {Point : piece}, the code\nboard[Point(x, y)] returns the piece at position (x, y).\n\nEmpty squares are not even present in the dictionary as keys.\n\n\"\"\"\n\nfrom collections import namedtuple\nfrom itertools import cycle, takewhile\n\nALPHABET = \"abcdefgh\"\nBOARD_SIZE = 8\n\nPoint = namedtuple('Point', ['x', 'y'])\n\n# Board is a dict {Point : piece}.\nboard = {\nPoint(0, 6) : \"♙\",\nPoint(1, 6) : \"♙\",\nPoint(2, 6) : \"♙\",\nPoint(3, 6) : \"♙\",\nPoint(4, 6) : \"♙\",\nPoint(5, 6) : \"♙\",\nPoint(6, 6) : \"♙\",\nPoint(7, 6) : \"♙\",\nPoint(0, 7) : \"♖\",\nPoint(1, 7) : \"♘\",\nPoint(2, 7) : \"♗\",\nPoint(3, 7) : \"♕\",\nPoint(4, 7) : \"♔\",\nPoint(5, 7) : \"♗\",\nPoint(6, 7) : \"♘\",\nPoint(7, 7) : \"♖\",\n\nPoint(0, 1) : \"♟\",\nPoint(1, 1) : \"♟\",\nPoint(2, 1) : \"♟\",\nPoint(3, 1) : \"♟\",\nPoint(4, 1) : \"♟\",\nPoint(5, 1) : \"♟\",\nPoint(6, 1) : \"♟\",\nPoint(7, 1) : \"♟\",\nPoint(0, 0) : \"♜\",\nPoint(1, 0) : \"♞\",\nPoint(2, 0) : \"♝\",\nPoint(3, 0) : \"♛\",\nPoint(4, 0) : \"♚\",\nPoint(5, 0) : \"♝\",\nPoint(6, 0) : \"♞\",\nPoint(7, 0) : \"♜\",\n\n}\n\ndef legal_by_deltas(start, end, deltas):\n\"\"\"\nGiven start and end position of a piece that moves by fixed (x, y) deltas,\nreturns if the end is reachable legally.\n\"\"\"\nreturn end in (Point(start.x + delta.x, start.y + delta.y)\nfor delta in (Point(p, p) for p in deltas))\n\ndef knight_jump(start, end, _):\n\"\"\"\nCan a knight jump from start to end?\n\nThe board is unused as the knight jumps on pieces in the middle\nof its path and the end square is already checked in make_move.\n\"\"\"\nKNIGHT_DELTAS = ( (1, 2), (2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1), (-1, 2), (-2, 1) )\nreturn legal_by_deltas(start, end, KNIGHT_DELTAS)\n\ndef king_step(start, end, _):\n# TO DO: Castling.\n\"\"\"\nCan a king step from start to end?\n\nThe board is unused as the king moving one square only\ncannot have pieces in the middle\nof its path and the end square is already checked in make_move.\n\"\"\"\nKING_DELTAS =( (1, -1), (-1, 1), (0, 1), (1, 0), (-1, 0), (0, -1), (1, 1), (-1, -1) )\nreturn legal_by_deltas(start, end, KING_DELTAS)\n\ndef rook_move_ignoring_obstruction(start, end):\nreturn start.x == end.x or start.y == end.y\n\ndef rook_move(start, end, board):\n\"\"\"\nCan a rook move from start to end?\n\nAlso checks if a piece blocks the path.\n\"\"\"\nr = lambda a, b: range(a, b) if a < b else reversed(range(a, b))\n\nif start.x == end.x:\nintermediates = (Point(start.x, y) for y in r((start.y + 1), end.y))\nif start.y == end.y:\nintermediates = (Point(x, start.y) for x in r((start.x + 1), end.x))\n\nreturn rook_move_ignoring_obstruction(start, end) and all(is_empty(s, board) for s in intermediates)\n\ndef bishop_move_ignoring_obstruction(start, end):\ndelta_x = end.x - start.x\ndelta_y = end.y - start.y\nreturn abs(delta_x) == abs(delta_y)\n\ndef bishop_move(start, end, board):\n\"\"\"\nCan a bishop move from start to end?\n\"\"\"\ndelta_x = end.x - start.x\ndelta_y = end.y - start.y\nif delta_x > 0 and delta_y > 0:\nps = ((1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7))\nif delta_x > 0 and delta_y < 0:\nps = ((1, -1), (2, -2), (3, -3), (4, -4), (5, -5), (6, -6), (7, -7))\nif delta_x < 0 and delta_y > 0:\nps = ((-1, 1), (-2, 2), (-3, 3), (-4, 4), (-5, 5), (-6, 6), (-7, 7))\nif delta_x < 0 and delta_y < 0:\nps = ((-1, -1), (-2, -2), (-3, -3), (-4, -4), (-5, -5), (-6, -6), (-7, -7))\n\nintermediates = list(takewhile(lambda x: x != end, (Point(start.x + p, start.y + p) for p in ps)))\nreturn bishop_move_ignoring_obstruction(start, end) and all(is_empty(s, board) for s in intermediates)\n\ndef is_empty(square, board):\n\"\"\"\nIs the given square (Point object) empty on the board?\n\nBeing the board a dictionary where the full squares are the keys,\nsquare in board tells us if the square is full.\n\nNo false values are allowed, all pieces strings are True.\n\"\"\"\n\nreturn square not in board\n\ndef is_piece_of_color(square, board, color):\nreturn (not is_empty(square, board)) and is_white(board[square]) == color\n\ndef pawn_move(start, end, board, color):\n\"\"\"\nCan a pawn move from start to end?\n\nNote that this function requires the colour of the pawn,\nas the pawns are the only piece that cannot go back.\n\"\"\"\n# To-do en-passant\nop = sub if color else add\nstart_y = 6 if color else 1\n\none_away = Point(start.x, op(start.y, 1))\ntwo_away = Point(start.x, op(start.y, 2))\n\nif end.x == start.x: # Normal : not capturing\nif end == one_away:\nreturn True\nif start.y == start_y: # Never moved\nreturn end in (one_away, two_away)\nif end.x not in (start.x + 1, start.x, start.x - 1): # No more than one step diagonally\nreturn False\n\n# Capturing\none_away_right = Point(one_away.x + 1, one_away.y)\none_away_left = Point(one_away.x - 1, one_away.y)\nif is_piece_of_color(end, board, not color) and end in (one_away_right, one_away_left):\nreturn True\nreturn True\n\ndef white_pawn_move(start, end, board):\nreturn pawn_move(start, end, board, True)\n\ndef black_pawn_move(start, end, board):\nreturn pawn_move(start, end, board, False)\n\ndef queen_move(start, end, board):\nreturn rook_move(start, end, board) or bishop_move(start, end, board)\n\n# En-passant and castling validations to be perfected later being more complex.\n# Validation for black and white is equal for all but pawns,\n# as pawns cannot go back and are asymmetrical.\nPIECE_TO_MOVE_VALIDATION = {\n\"♙\" : white_pawn_move,\n\"♘\" : knight_jump,\n\"♗\" : bishop_move,\n\"♕\" : queen_move,\n\"♔\" : king_step,\n\"♖\" : rook_move,\n\n\"♟\" : black_pawn_move,\n\"♞\" : knight_jump,\n\"♝\" : bishop_move,\n\"♛\" : queen_move,\n\"♚\" : king_step,\n\"♜\" : rook_move\n}\n\ndef print_board(board):\n\"\"\"\nPrints the given board : dict{Point:piece} in a human readable format\nand adds notation letters and numbers to the side to aid the user in\ninputting their moves.\n\nSee __doc__ at the top, section Visual appearence to see an example output.\n\"\"\"\nfor y in range(BOARD_SIZE):\nprint(BOARD_SIZE - y, end=\" \")\nfor x in range(BOARD_SIZE):\nprint(board[Point(x, y)] if Point(x, y) in board else \" \", end=\" \")\nprint(\"\\n\",end=\"\")\nprint(\" \" + ' '.join(ALPHABET) + \"\\n\")\n\ndef is_white(piece):\n\"\"\" Is the given piece white? \"\"\"\nreturn piece in \"♙♘♗♖♕♔\"\n\ndef make_move(board, start, end, turn):\n\"\"\"\nPerforms the validations listed in the main __doc__\nsection Legality checks and actuates the move.\n\nThe board is mutated in place.\n\"\"\"\nif start.x not in range(BOARD_SIZE) or start.y not in range(BOARD_SIZE):\nraise ValueError(\"The starting square is not inside the board.\")\n\nif end.x not in range(BOARD_SIZE) or end.y not in range(BOARD_SIZE):\nraise ValueError(\"The destination square is not inside the board.\")\n\nif start not in board:\nraise ValueError(\"There is no piece to be moved at the given starting location.\")\n\nif is_white(board[start]) != turn:\nraise ValueError(\"The current player is attempting to move an opponent's piece.\")\n\nif not is_valid_move(board[start], start, end, board):\nraise ValueError(\"The {} does not move in this manner (Or there is a piece blocking its path).\".format(board[start]))\n\nif end in board and is_white(board[start]) == is_white(board[end]):\nraise ValueError(\"The destination square is already occupied by a same color piece.\")\n\nboard[end] = board[start]\ndel board[start]\n\ndef is_valid_move(piece, start, end, board=board):\n\"\"\" Can the given piece move this way? \"\"\"\nreturn PIECE_TO_MOVE_VALIDATION[piece](start, end, board)\n\n\"\"\"\nPrompts the user for a square in chess coordinates and\nreturns a Point object indicating such square.\n\"\"\"\ngiven = input(prompt)\nif not (given in ALPHABET and given in \"12345678\"):\nprint(\"Invalid coordinates, [ex: b4, e6, a1, h8 ...]. Try again.\")\nreturn Point(ALPHABET.index(given), 8 - int(given))\n\ndef human_player(board, turn):\n\"\"\"\nPrompts a human player to make a move.\n\nAlso shows him the board to inform him about the\ncurrent game state and validates the move as\ndetailed in the main __doc__ section Legality checks\n\"\"\"\nprint(\"{}'s Turn.\\n\".format(\"White\" if turn else \"Black\"))\nprint_board(board)\nprint(\"\\n\\n\")\ntry:\nmake_move(board, start, end, turn)\nexcept ValueError as e:\nprint(\"Invalid move: {}\".format(e))\nhuman_player(board, turn)\n\ndef interact_with_board(board):\n\"\"\"\nAllows the players to play a game.\n\"\"\"\nfor turn, player in cycle(((True, human_player), (False, human_player))):\nplayer(board, turn)\n\nif __name__ == \"__main__\":\ninteract_with_board(board)\n\n• The idea and implementation looks pretty cool :)\n– ave\nJun 20, 2016 at 23:24\n• You might find github.com/thomasahle/sunfish interesting. Jun 21, 2016 at 9:44\n• The board looks so awesome! Jun 21, 2016 at 12:07\n\n# Repeated logic\n\nYou have an is_empty helper function that you don't use in make_move validation. You also check that the user input fits into the board both in ask_chess_coordinate and in make_move. You can keep the validation in ask_chess_coordinate an remove it from make_move since it makes more sense to warn about such error this early.\n\n# Recursion\n\nask_chess_coordinate and human_player both use recursion to handle illegal moves/positions. But I don't see an interest to that as you’re not modifying their parameters. Using an explicit loop feels better here:\n\ndef ask_chess_coordinate(prompt):\n\"\"\"\nPrompts the user for a square in chess coordinates and\nreturns a Point object indicating such square.\n\"\"\"\nwhile True:\ngiven = input(prompt)\nif not (given in ALPHABET and given in \"12345678\"):\nprint(\"Invalid coordinates, [ex: b4, e6, a1, h8 ...]. Try again.\")\nelse:\nreturn Point(ALPHABET.index(given), 8 - int(given))\n\ndef human_player(board, turn):\n\"\"\"\nPrompts a human player to make a move.\n\nAlso shows him the board to inform him about the\ncurrent game state and validates the move as\ndetailed in the main __doc__ section Legality checks\n\"\"\"\nwhile True:\nprint(\"{}'s Turn.\\n\".format(\"White\" if turn else \"Black\"))\nprint_board(board)\nprint(\"\\n\\n\")\ntry:\nmake_move(board, start, end, turn)\nexcept ValueError as e:\nprint(\"Invalid move: {}\".format(e))\nelse:\nbreak\n\n\n# Unpacking\n\nIt is my personal taste, but I find unpacking sexier than indexing. You can use it in various places:\n\n• ask_chess_coordinates (even though it makes it a bit more verbose :/)\n\ndef ask_chess_coordinate(prompt):\n\"\"\"\nPrompts the user for a square in chess coordinates and\nreturns a Point object indicating such square.\n\"\"\"\nwhile True:\ntry:\nx, y = input(prompt)\ny = 8 - int(y)\nexcept ValueError:\nprint(\"Invalid format. Expecting a letter and a digit [ex: b4, e6, a1, h8 ...].\")\nelse:\nif x not in ALPHABET and y not in range(BOARD_SIZE):\nprint(\"Coordinates out of bounds. Try again.\")\nelse:\nreturn Point(ALPHABET.index(x), y)\n\n• bishop_move:\n\nintermediates = list(takewhile(lambda x: x != end, (Point(start.x + x, start.y + y) for x, y in ps)))\n\n• legal_by_delta:\n\nreturn end in (Point(start.x + x, start.y + y) for x, y in deltas)\n\n\nYou get the point.\n\n# sign\n\nimport math\n\ndef sign(x):\nreturn int(math.copysign(1, x))\n\n\ndef bishop_move(start, end, board):\n\"\"\"\nCan a bishop move from start to end?\n\"\"\"\ndelta_x = sign(end.x - start.x)\ndelta_y = sign(end.y - start.y)\nps = ((delta_x * i, delta_y * i) for i in range(1, BOARD_SIZE))\n\nintermediates = takewhile(end.__ne__, (Point(start.x + x, start.y + y) for x, y in ps))\nreturn bishop_move_ignoring_obstruction(start, end) and all(is_empty(s, board) for s in intermediates)\n\n\n(I also changed the lambda to propose an alternative and removed converting intermediates to a list as you don't need it.)\n\ndef rook_move(start, end, board):\n\"\"\"\nCan a rook move from start to end?\n\nAlso checks if a piece blocks the path.\n\"\"\"\ndef r(a, b):\ndirection = sign(b - a)\nreturn range(a + direction, b, direction)\n\nif start.x == end.x:\nintermediates = (Point(start.x, y) for y in r(start.y, end.y))\nif start.y == end.y:\nintermediates = (Point(x, start.y) for x in r(start.x, end.x))\n\nreturn rook_move_ignoring_obstruction(start, end) and all(is_empty(s, board) for s in intermediates)\n\n\nBy the way, your function had a bug. Try printing the list of intermediates positions instead of returning something and call it with rook_move(Point(3,4), Point(3, 1), None) ;)\n\n# TODO list\n\nYou should add pawn promotion to your list, probably before castling or en-passant, but after limiting moves for checks (because you need that to check for end of game).\n\nGiven the amount of functions that take the board as parameter, you may want to define a class instead. Or at least:\n\nif __name__ == \"__main__\":\ninteract_with_board(board.copy())\n\n\nto easily restart games.\n\nThis is pretty cool! I like that you're using the unicode. I do have some points about readability/usability though.\n\nYou have DELTAS constants, but they're all defined within functions, instead I would make these constants global. That would mean they're all defined in the same place, and you could maybe even have a dictionary of DELTAS with keys for 'knight', 'king' etc. I also think you should store all those deltas as points rather than just pairs of co-ordinates that you need to turn into points.\n\nAlso named tuples are great and handy, but a custom class offers you more options. You could set up a Point class so that you could just sum them directly. By defining an __add__ method you can just sum two points with Point() + Point().\n\nclass Point:\ndef __init__(self, x, y):\nself.x = x\nself.y = y\n\nreturn Point(self.x + other.x, self.y + other.y)\n\ndef __repr__(self):\nreturn \"Point(x={}, y={})\".format(self.x, self.y)\n\n\nBy combining those options, you can make a much more simple legal_by_deltas function:\n\ndef legal_by_deltas(start, end, deltas):\n\"\"\"\nGiven start and end position of a piece that moves by fixed (x, y) deltas,\nreturns if the end is reachable legally.\n\"\"\"\nreturn end in (start + delta for delta in deltas)\n\n\nYou could even expand Point further with an __eq__ function that can handle Point() == Point(), which would do away with the need for rook_move_ignoring_obstruction.\n\nIn general, I think you have too many little functions, it would be more readable to me to avoid abstracting all the time. is_empty doesn't make sense to me, when you could just use the test directly. Compare:\n\nall(is_empty(s, board) for s in intermediates)\n\n\nto\n\nall(square not in board for square in intermediates)\n\n• It is even possible to define class Point(namedtuple('Point', 'x y')) to get the best of both worlds (__eq__ and __repr__ are already defined by namedtuple). Jun 21, 2016 at 9:32\n\nSince this mostly looks like it is good shape, I hate to suggest it, but I would strongly consider using a list of lists for your board. There are several advantages to this approach.\n\n1. The board size is fixed, so all moves would still be O(1) as they are with a dictionary (the main reason for dicts as opposed to lists)\n2. This allows the syntax board[row][col] which is arguably nicer.\n3. You get the ability to write board[row] and [board[i][col] for i in range(8)] for quite quick ways of obtaining a row or column of the board: something that I'd imagine would be useful for AI stuff.\n4. Lists have faster access as indexes are just memory jumps rather than a hash function and a memory jump\n\nlastly, a suggestion for how to implement check: after a move is made, check if your king is in check, and if so say that the move is illegal. This is preferable to checking if the king is in check first because it will also prevent people from moving a piece that was in-between an enemy piece and their king. Overall, this is a really cool project, and half of the reason I'm making suggestions is to see what happens with it.\n\n• You may use [BACKQUOTE] to highlight code. Good answer, welcome to Codereview. Jun 24, 2016 at 14:44\n• I also just realized that the last 3 lines of move_pawn` are redundant. The function returns True no matter what happens with the if statement. Jun 25, 2016 at 20:19"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6997065,"math_prob":0.97139174,"size":11023,"snap":"2023-14-2023-23","text_gpt3_token_len":3569,"char_repetition_ratio":0.14611126,"word_repetition_ratio":0.34040353,"special_character_ratio":0.34110495,"punctuation_ratio":0.22008733,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97471577,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T14:37:48Z\",\"WARC-Record-ID\":\"<urn:uuid:b52af919-7ab1-4027-bc25-414eb169048c>\",\"Content-Length\":\"200197\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:82108641-ecb9-4f85-b59d-15f2db7344a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0e9bfcd-0726-42aa-8f7f-b48755c9834f>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/132582/unicode-chess-pvp-with-move-validation\",\"WARC-Payload-Digest\":\"sha1:JOJBODIN2TUKJI6IJDNNHUSCYWAL5JP3\",\"WARC-Block-Digest\":\"sha1:I5T2QGWPXG6LS2CRBNQC5JRNCOPVKM6T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224645810.57_warc_CC-MAIN-20230530131531-20230530161531-00690.warc.gz\"}"} |
https://www.colorhexa.com/038267 | [
"# #038267 Color Information\n\nIn a RGB color space, hex #038267 is composed of 1.2% red, 51% green and 40.4% blue. Whereas in a CMYK color space, it is composed of 97.7% cyan, 0% magenta, 20.8% yellow and 49% black. It has a hue angle of 167.2 degrees, a saturation of 95.5% and a lightness of 26.1%. #038267 color hex could be obtained by blending #06ffce with #000500. Closest websafe color is: #009966.\n\n• R 1\n• G 51\n• B 40\nRGB color chart\n• C 98\n• M 0\n• Y 21\n• K 49\nCMYK color chart\n\n#038267 color description : Dark cyan.\n\n# #038267 Color Conversion\n\nThe hexadecimal color #038267 has RGB values of R:3, G:130, B:103 and CMYK values of C:0.98, M:0, Y:0.21, K:0.49. Its decimal value is 229991.\n\nHex triplet RGB Decimal 038267 `#038267` 3, 130, 103 `rgb(3,130,103)` 1.2, 51, 40.4 `rgb(1.2%,51%,40.4%)` 98, 0, 21, 49 167.2°, 95.5, 26.1 `hsl(167.2,95.5%,26.1%)` 167.2°, 97.7, 51 009966 `#009966`\nCIE-LAB 48.213, -37.116, 6.163 10.467, 16.963, 15.554 0.244, 0.395, 16.963 48.213, 37.624, 170.572 48.213, -39.773, 13.569 41.186, -26.709, 6.44 00000011, 10000010, 01100111\n\n# Color Schemes with #038267\n\n• #038267\n``#038267` `rgb(3,130,103)``\n• #82031e\n``#82031e` `rgb(130,3,30)``\nComplementary Color\n• #038228\n``#038228` `rgb(3,130,40)``\n• #038267\n``#038267` `rgb(3,130,103)``\n• #035d82\n``#035d82` `rgb(3,93,130)``\nAnalogous Color\n• #822803\n``#822803` `rgb(130,40,3)``\n• #038267\n``#038267` `rgb(3,130,103)``\n• #82035d\n``#82035d` `rgb(130,3,93)``\nSplit Complementary Color\n• #826703\n``#826703` `rgb(130,103,3)``\n• #038267\n``#038267` `rgb(3,130,103)``\n• #670382\n``#670382` `rgb(103,3,130)``\n• #1e8203\n``#1e8203` `rgb(30,130,3)``\n• #038267\n``#038267` `rgb(3,130,103)``\n• #670382\n``#670382` `rgb(103,3,130)``\n• #82031e\n``#82031e` `rgb(130,3,30)``\n• #01372c\n``#01372c` `rgb(1,55,44)``\n• #025040\n``#025040` `rgb(2,80,64)``\n• #026953\n``#026953` `rgb(2,105,83)``\n• #038267\n``#038267` `rgb(3,130,103)``\n• #049b7b\n``#049b7b` `rgb(4,155,123)``\n• #04b48e\n``#04b48e` `rgb(4,180,142)``\n• #05cda2\n``#05cda2` `rgb(5,205,162)``\nMonochromatic Color\n\n# Alternatives to #038267\n\nBelow, you can see some colors close to #038267. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #038247\n``#038247` `rgb(3,130,71)``\n• #038252\n``#038252` `rgb(3,130,82)``\n• #03825c\n``#03825c` `rgb(3,130,92)``\n• #038267\n``#038267` `rgb(3,130,103)``\n• #038272\n``#038272` `rgb(3,130,114)``\n• #03827c\n``#03827c` `rgb(3,130,124)``\n• #037d82\n``#037d82` `rgb(3,125,130)``\nSimilar Colors\n\n# #038267 Preview\n\nThis text has a font color of #038267.\n\n``<span style=\"color:#038267;\">Text here</span>``\n#038267 background color\n\nThis paragraph has a background color of #038267.\n\n``<p style=\"background-color:#038267;\">Content here</p>``\n#038267 border color\n\nThis element has a border color of #038267.\n\n``<div style=\"border:1px solid #038267;\">Content here</div>``\nCSS codes\n``.text {color:#038267;}``\n``.background {background-color:#038267;}``\n``.border {border:1px solid #038267;}``\n\n# Shades and Tints of #038267\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000f0c is the darkest color, while #fbfffe is the lightest one.\n\n• #000f0c\n``#000f0c` `rgb(0,15,12)``\n• #01221b\n``#01221b` `rgb(1,34,27)``\n• #01352a\n``#01352a` `rgb(1,53,42)``\n• #024839\n``#024839` `rgb(2,72,57)``\n• #025c49\n``#025c49` `rgb(2,92,73)``\n• #036f58\n``#036f58` `rgb(3,111,88)``\n• #038267\n``#038267` `rgb(3,130,103)``\n• #039576\n``#039576` `rgb(3,149,118)``\n• #04a885\n``#04a885` `rgb(4,168,133)``\n• #04bc95\n``#04bc95` `rgb(4,188,149)``\n• #05cfa4\n``#05cfa4` `rgb(5,207,164)``\n• #05e2b3\n``#05e2b3` `rgb(5,226,179)``\n• #06f5c2\n``#06f5c2` `rgb(6,245,194)``\n• #15fac9\n``#15fac9` `rgb(21,250,201)``\n• #28facd\n``#28facd` `rgb(40,250,205)``\n``#3bfad2` `rgb(59,250,210)``\n• #4efbd6\n``#4efbd6` `rgb(78,251,214)``\n• #61fbdb\n``#61fbdb` `rgb(97,251,219)``\n• #75fcdf\n``#75fcdf` `rgb(117,252,223)``\n• #88fce3\n``#88fce3` `rgb(136,252,227)``\n• #9bfde8\n``#9bfde8` `rgb(155,253,232)``\n• #aefdec\n``#aefdec` `rgb(174,253,236)``\n• #c1fef1\n``#c1fef1` `rgb(193,254,241)``\n• #d4fef5\n``#d4fef5` `rgb(212,254,245)``\n• #e8fefa\n``#e8fefa` `rgb(232,254,250)``\n• #fbfffe\n``#fbfffe` `rgb(251,255,254)``\nTint Color Variation\n\n# Tones of #038267\n\nA tone is produced by adding gray to any pure hue. In this case, #404544 is the less saturated color, while #038267 is the most saturated one.\n\n• #404544\n``#404544` `rgb(64,69,68)``\n• #3b4a47\n``#3b4a47` `rgb(59,74,71)``\n• #364f4a\n``#364f4a` `rgb(54,79,74)``\n• #31544d\n``#31544d` `rgb(49,84,77)``\n• #2c594f\n``#2c594f` `rgb(44,89,79)``\n• #275e52\n``#275e52` `rgb(39,94,82)``\n• #226355\n``#226355` `rgb(34,99,85)``\n• #1d6858\n``#1d6858` `rgb(29,104,88)``\n• #176e5b\n``#176e5b` `rgb(23,110,91)``\n• #12735e\n``#12735e` `rgb(18,115,94)``\n• #0d7861\n``#0d7861` `rgb(13,120,97)``\n• #087d64\n``#087d64` `rgb(8,125,100)``\n• #038267\n``#038267` `rgb(3,130,103)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #038267 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5368268,"math_prob":0.7693532,"size":3664,"snap":"2019-43-2019-47","text_gpt3_token_len":1578,"char_repetition_ratio":0.124863386,"word_repetition_ratio":0.011111111,"special_character_ratio":0.56877726,"punctuation_ratio":0.23516238,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99358577,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T10:15:58Z\",\"WARC-Record-ID\":\"<urn:uuid:0afc0e6f-2dc0-48d8-9d24-3597335becdb>\",\"Content-Length\":\"36225\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:33e4761d-d5ef-4234-8151-ab7a9b7b698d>\",\"WARC-Concurrent-To\":\"<urn:uuid:9298b88b-ab6c-4437-ac73-f3f4ee239a82>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/038267\",\"WARC-Payload-Digest\":\"sha1:KUPVMWNM4BQ2OBOC2GHNE3CW2V4NWYKH\",\"WARC-Block-Digest\":\"sha1:FBRX66QN6OOL5P2PWTVIDPKZC4CI4BBG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667177.24_warc_CC-MAIN-20191113090217-20191113114217-00328.warc.gz\"}"} |
https://petunia100.savingadvice.com/2017/10/ | [
"User Real IP - 52.91.0.112\nArray\n(\n => Array\n(\n => 182.68.68.92\n)\n\n => Array\n(\n => 101.0.41.201\n)\n\n => Array\n(\n => 43.225.98.123\n)\n\n => Array\n(\n => 2.58.194.139\n)\n\n => Array\n(\n => 46.119.197.104\n)\n\n => Array\n(\n => 45.249.8.93\n)\n\n => Array\n(\n => 103.12.135.72\n)\n\n => Array\n(\n => 157.35.243.216\n)\n\n => Array\n(\n => 209.107.214.176\n)\n\n => Array\n(\n => 5.181.233.166\n)\n\n => Array\n(\n => 106.201.10.100\n)\n\n => Array\n(\n => 36.90.55.39\n)\n\n => Array\n(\n => 119.154.138.47\n)\n\n => Array\n(\n => 51.91.31.157\n)\n\n => Array\n(\n => 182.182.65.216\n)\n\n => Array\n(\n => 157.35.252.63\n)\n\n => Array\n(\n => 14.142.34.163\n)\n\n => Array\n(\n => 178.62.43.135\n)\n\n => Array\n(\n => 43.248.152.148\n)\n\n => Array\n(\n => 222.252.104.114\n)\n\n => Array\n(\n => 209.107.214.168\n)\n\n => Array\n(\n => 103.99.199.250\n)\n\n => Array\n(\n => 178.62.72.160\n)\n\n => Array\n(\n => 27.6.1.170\n)\n\n => Array\n(\n => 182.69.249.219\n)\n\n => Array\n(\n => 110.93.228.86\n)\n\n => Array\n(\n => 72.255.1.98\n)\n\n => Array\n(\n => 182.73.111.98\n)\n\n => Array\n(\n => 45.116.117.11\n)\n\n => Array\n(\n => 122.15.78.189\n)\n\n => Array\n(\n => 14.167.188.234\n)\n\n => Array\n(\n => 223.190.4.202\n)\n\n => Array\n(\n => 202.173.125.19\n)\n\n => Array\n(\n => 103.255.5.32\n)\n\n => Array\n(\n => 39.37.145.103\n)\n\n => Array\n(\n => 140.213.26.249\n)\n\n => Array\n(\n => 45.118.166.85\n)\n\n => Array\n(\n => 102.166.138.255\n)\n\n => Array\n(\n => 77.111.246.234\n)\n\n => Array\n(\n => 45.63.6.196\n)\n\n => Array\n(\n => 103.250.147.115\n)\n\n => Array\n(\n => 223.185.30.99\n)\n\n => Array\n(\n => 103.122.168.108\n)\n\n => Array\n(\n => 123.136.203.21\n)\n\n => Array\n(\n => 171.229.243.63\n)\n\n => Array\n(\n => 153.149.98.149\n)\n\n => Array\n(\n => 223.238.93.15\n)\n\n => Array\n(\n => 178.62.113.166\n)\n\n => Array\n(\n => 101.162.0.153\n)\n\n => Array\n(\n => 121.200.62.114\n)\n\n => Array\n(\n => 14.248.77.252\n)\n\n => Array\n(\n => 95.142.117.29\n)\n\n => Array\n(\n => 150.129.60.107\n)\n\n => Array\n(\n => 94.205.243.22\n)\n\n => Array\n(\n => 115.42.71.143\n)\n\n => Array\n(\n => 117.217.195.59\n)\n\n => Array\n(\n => 182.77.112.56\n)\n\n => Array\n(\n => 182.77.112.108\n)\n\n => Array\n(\n => 41.80.69.10\n)\n\n => Array\n(\n => 117.5.222.121\n)\n\n => Array\n(\n => 103.11.0.38\n)\n\n => Array\n(\n => 202.173.127.140\n)\n\n => Array\n(\n => 49.249.249.50\n)\n\n => Array\n(\n => 116.72.198.211\n)\n\n => Array\n(\n => 223.230.54.53\n)\n\n => Array\n(\n => 102.69.228.74\n)\n\n => Array\n(\n => 39.37.251.89\n)\n\n => Array\n(\n => 39.53.246.141\n)\n\n => Array\n(\n => 39.57.182.72\n)\n\n => Array\n(\n => 209.58.130.210\n)\n\n => Array\n(\n => 104.131.75.86\n)\n\n => Array\n(\n => 106.212.131.255\n)\n\n => Array\n(\n => 106.212.132.127\n)\n\n => Array\n(\n => 223.190.4.60\n)\n\n => Array\n(\n => 103.252.116.252\n)\n\n => Array\n(\n => 103.76.55.182\n)\n\n => Array\n(\n => 45.118.166.70\n)\n\n => Array\n(\n => 103.93.174.215\n)\n\n => Array\n(\n => 5.62.62.142\n)\n\n => Array\n(\n => 182.179.158.156\n)\n\n => Array\n(\n => 39.57.255.12\n)\n\n => Array\n(\n => 39.37.178.37\n)\n\n => Array\n(\n => 182.180.165.211\n)\n\n => Array\n(\n => 119.153.135.17\n)\n\n => Array\n(\n => 72.255.15.244\n)\n\n => Array\n(\n => 139.180.166.181\n)\n\n => Array\n(\n => 70.119.147.111\n)\n\n => Array\n(\n => 106.210.40.83\n)\n\n => Array\n(\n => 14.190.70.91\n)\n\n => Array\n(\n => 202.125.156.82\n)\n\n => Array\n(\n => 115.42.68.38\n)\n\n => Array\n(\n => 102.167.13.108\n)\n\n => Array\n(\n => 117.217.192.130\n)\n\n => Array\n(\n => 205.185.223.156\n)\n\n => Array\n(\n => 171.224.180.29\n)\n\n => Array\n(\n => 45.127.45.68\n)\n\n => Array\n(\n => 195.206.183.232\n)\n\n => Array\n(\n => 49.32.52.115\n)\n\n => Array\n(\n => 49.207.49.223\n)\n\n => Array\n(\n => 45.63.29.61\n)\n\n => Array\n(\n => 103.245.193.214\n)\n\n => Array\n(\n => 39.40.236.69\n)\n\n => Array\n(\n => 62.80.162.111\n)\n\n => Array\n(\n => 45.116.232.56\n)\n\n => Array\n(\n => 45.118.166.91\n)\n\n => Array\n(\n => 180.92.230.234\n)\n\n => Array\n(\n => 157.40.57.160\n)\n\n => Array\n(\n => 110.38.38.130\n)\n\n => Array\n(\n => 72.255.57.183\n)\n\n => Array\n(\n => 182.68.81.85\n)\n\n => Array\n(\n => 39.57.202.122\n)\n\n => Array\n(\n => 119.152.154.36\n)\n\n => Array\n(\n => 5.62.62.141\n)\n\n => Array\n(\n => 119.155.54.232\n)\n\n => Array\n(\n => 39.37.141.22\n)\n\n => Array\n(\n => 183.87.12.225\n)\n\n => Array\n(\n => 107.170.127.117\n)\n\n => Array\n(\n => 125.63.124.49\n)\n\n => Array\n(\n => 39.42.191.3\n)\n\n => Array\n(\n => 116.74.24.72\n)\n\n => Array\n(\n => 46.101.89.227\n)\n\n => Array\n(\n => 202.173.125.247\n)\n\n => Array\n(\n => 39.42.184.254\n)\n\n => Array\n(\n => 115.186.165.132\n)\n\n => Array\n(\n => 39.57.206.126\n)\n\n => Array\n(\n => 103.245.13.145\n)\n\n => Array\n(\n => 202.175.246.43\n)\n\n => Array\n(\n => 192.140.152.150\n)\n\n => Array\n(\n => 202.88.250.103\n)\n\n => Array\n(\n => 103.248.94.207\n)\n\n => Array\n(\n => 77.73.66.101\n)\n\n => Array\n(\n => 104.131.66.8\n)\n\n => Array\n(\n => 113.186.161.97\n)\n\n => Array\n(\n => 222.254.5.7\n)\n\n => Array\n(\n => 223.233.67.247\n)\n\n => Array\n(\n => 171.249.116.146\n)\n\n => Array\n(\n => 47.30.209.71\n)\n\n => Array\n(\n => 202.134.13.130\n)\n\n => Array\n(\n => 27.6.135.7\n)\n\n => Array\n(\n => 107.170.186.79\n)\n\n => Array\n(\n => 103.212.89.171\n)\n\n => Array\n(\n => 117.197.9.77\n)\n\n => Array\n(\n => 122.176.206.233\n)\n\n => Array\n(\n => 192.227.253.222\n)\n\n => Array\n(\n => 182.188.224.119\n)\n\n => Array\n(\n => 14.248.70.74\n)\n\n => Array\n(\n => 42.118.219.169\n)\n\n => Array\n(\n => 110.39.146.170\n)\n\n => Array\n(\n => 119.160.66.143\n)\n\n => Array\n(\n => 103.248.95.130\n)\n\n => Array\n(\n => 27.63.152.208\n)\n\n => Array\n(\n => 49.207.114.96\n)\n\n => Array\n(\n => 102.166.23.214\n)\n\n => Array\n(\n => 175.107.254.73\n)\n\n => Array\n(\n => 103.10.227.214\n)\n\n => Array\n(\n => 202.143.115.89\n)\n\n => Array\n(\n => 110.93.227.187\n)\n\n => Array\n(\n => 103.140.31.60\n)\n\n => Array\n(\n => 110.37.231.46\n)\n\n => Array\n(\n => 39.36.99.238\n)\n\n => Array\n(\n => 157.37.140.26\n)\n\n => Array\n(\n => 43.246.202.226\n)\n\n => Array\n(\n => 137.97.8.143\n)\n\n => Array\n(\n => 182.65.52.242\n)\n\n => Array\n(\n => 115.42.69.62\n)\n\n => Array\n(\n => 14.143.254.58\n)\n\n => Array\n(\n => 223.179.143.236\n)\n\n => Array\n(\n => 223.179.143.249\n)\n\n => Array\n(\n => 103.143.7.54\n)\n\n => Array\n(\n => 223.179.139.106\n)\n\n => Array\n(\n => 39.40.219.90\n)\n\n => Array\n(\n => 45.115.141.231\n)\n\n => Array\n(\n => 120.29.100.33\n)\n\n => Array\n(\n => 112.196.132.5\n)\n\n => Array\n(\n => 202.163.123.153\n)\n\n => Array\n(\n => 5.62.58.146\n)\n\n => Array\n(\n => 39.53.216.113\n)\n\n => Array\n(\n => 42.111.160.73\n)\n\n => Array\n(\n => 107.182.231.213\n)\n\n => Array\n(\n => 119.82.94.120\n)\n\n => Array\n(\n => 178.62.34.82\n)\n\n => Array\n(\n => 203.122.6.18\n)\n\n => Array\n(\n => 157.42.38.251\n)\n\n => Array\n(\n => 45.112.68.222\n)\n\n => Array\n(\n => 49.206.212.122\n)\n\n => Array\n(\n => 104.236.70.228\n)\n\n => Array\n(\n => 42.111.34.243\n)\n\n => Array\n(\n => 84.241.19.186\n)\n\n => Array\n(\n => 89.187.180.207\n)\n\n => Array\n(\n => 104.243.212.118\n)\n\n => Array\n(\n => 104.236.55.136\n)\n\n => Array\n(\n => 106.201.16.163\n)\n\n => Array\n(\n => 46.101.40.25\n)\n\n => Array\n(\n => 45.118.166.94\n)\n\n => Array\n(\n => 49.36.128.102\n)\n\n => Array\n(\n => 14.142.193.58\n)\n\n => Array\n(\n => 212.79.124.176\n)\n\n => Array\n(\n => 45.32.191.194\n)\n\n => Array\n(\n => 105.112.107.46\n)\n\n => Array\n(\n => 106.201.14.8\n)\n\n => Array\n(\n => 110.93.240.65\n)\n\n => Array\n(\n => 27.96.95.177\n)\n\n => Array\n(\n => 45.41.134.35\n)\n\n => Array\n(\n => 180.151.13.110\n)\n\n => Array\n(\n => 101.53.242.89\n)\n\n => Array\n(\n => 115.186.3.110\n)\n\n => Array\n(\n => 171.49.185.242\n)\n\n => Array\n(\n => 115.42.70.24\n)\n\n => Array\n(\n => 45.128.188.43\n)\n\n => Array\n(\n => 103.140.129.63\n)\n\n => Array\n(\n => 101.50.113.147\n)\n\n => Array\n(\n => 103.66.73.30\n)\n\n => Array\n(\n => 117.247.193.169\n)\n\n => Array\n(\n => 120.29.100.94\n)\n\n => Array\n(\n => 42.109.154.39\n)\n\n => Array\n(\n => 122.173.155.150\n)\n\n => Array\n(\n => 45.115.104.53\n)\n\n => Array\n(\n => 116.74.29.84\n)\n\n => Array\n(\n => 101.50.125.34\n)\n\n => Array\n(\n => 45.118.166.80\n)\n\n => Array\n(\n => 91.236.184.27\n)\n\n => Array\n(\n => 113.167.185.120\n)\n\n => Array\n(\n => 27.97.66.222\n)\n\n => Array\n(\n => 43.247.41.117\n)\n\n => Array\n(\n => 23.229.16.227\n)\n\n => Array\n(\n => 14.248.79.209\n)\n\n => Array\n(\n => 117.5.194.26\n)\n\n => Array\n(\n => 117.217.205.41\n)\n\n => Array\n(\n => 114.79.169.99\n)\n\n => Array\n(\n => 103.55.60.97\n)\n\n => Array\n(\n => 182.75.89.210\n)\n\n => Array\n(\n => 77.73.66.109\n)\n\n => Array\n(\n => 182.77.126.139\n)\n\n => Array\n(\n => 14.248.77.166\n)\n\n => Array\n(\n => 157.35.224.133\n)\n\n => Array\n(\n => 183.83.38.27\n)\n\n => Array\n(\n => 182.68.4.77\n)\n\n => Array\n(\n => 122.177.130.234\n)\n\n => Array\n(\n => 103.24.99.99\n)\n\n => Array\n(\n => 103.91.127.66\n)\n\n => Array\n(\n => 41.90.34.240\n)\n\n => Array\n(\n => 49.205.77.102\n)\n\n => Array\n(\n => 103.248.94.142\n)\n\n => Array\n(\n => 104.143.92.170\n)\n\n => Array\n(\n => 219.91.157.114\n)\n\n => Array\n(\n => 223.190.88.22\n)\n\n => Array\n(\n => 223.190.86.232\n)\n\n => Array\n(\n => 39.41.172.80\n)\n\n => Array\n(\n => 124.107.206.5\n)\n\n => Array\n(\n => 139.167.180.224\n)\n\n => Array\n(\n => 93.76.64.248\n)\n\n => Array\n(\n => 65.216.227.119\n)\n\n => Array\n(\n => 223.190.119.141\n)\n\n => Array\n(\n => 110.93.237.179\n)\n\n => Array\n(\n => 41.90.7.85\n)\n\n => Array\n(\n => 103.100.6.26\n)\n\n => Array\n(\n => 104.140.83.13\n)\n\n => Array\n(\n => 223.190.119.133\n)\n\n => Array\n(\n => 119.152.150.87\n)\n\n => Array\n(\n => 103.125.130.147\n)\n\n => Array\n(\n => 27.6.5.52\n)\n\n => Array\n(\n => 103.98.188.26\n)\n\n => Array\n(\n => 39.35.121.81\n)\n\n => Array\n(\n => 74.119.146.182\n)\n\n => Array\n(\n => 5.181.233.162\n)\n\n => Array\n(\n => 157.39.18.60\n)\n\n => Array\n(\n => 1.187.252.25\n)\n\n => Array\n(\n => 39.42.145.59\n)\n\n => Array\n(\n => 39.35.39.198\n)\n\n => Array\n(\n => 49.36.128.214\n)\n\n => Array\n(\n => 182.190.20.56\n)\n\n => Array\n(\n => 122.180.249.189\n)\n\n => Array\n(\n => 117.217.203.107\n)\n\n => Array\n(\n => 103.70.82.241\n)\n\n => Array\n(\n => 45.118.166.68\n)\n\n => Array\n(\n => 122.180.168.39\n)\n\n => Array\n(\n => 149.28.67.254\n)\n\n => Array\n(\n => 223.233.73.8\n)\n\n => Array\n(\n => 122.167.140.0\n)\n\n => Array\n(\n => 95.158.51.55\n)\n\n => Array\n(\n => 27.96.95.134\n)\n\n => Array\n(\n => 49.206.214.53\n)\n\n => Array\n(\n => 212.103.49.92\n)\n\n => Array\n(\n => 122.177.115.101\n)\n\n => Array\n(\n => 171.50.187.124\n)\n\n => Array\n(\n => 122.164.55.107\n)\n\n => Array\n(\n => 98.114.217.204\n)\n\n => Array\n(\n => 106.215.10.54\n)\n\n => Array\n(\n => 115.42.68.28\n)\n\n => Array\n(\n => 104.194.220.87\n)\n\n => Array\n(\n => 103.137.84.170\n)\n\n => Array\n(\n => 61.16.142.110\n)\n\n => Array\n(\n => 212.103.49.85\n)\n\n => Array\n(\n => 39.53.248.162\n)\n\n => Array\n(\n => 203.122.40.214\n)\n\n => Array\n(\n => 117.217.198.72\n)\n\n => Array\n(\n => 115.186.191.203\n)\n\n => Array\n(\n => 120.29.100.199\n)\n\n => Array\n(\n => 45.151.237.24\n)\n\n => Array\n(\n => 223.190.125.232\n)\n\n => Array\n(\n => 41.80.151.17\n)\n\n => Array\n(\n => 23.111.188.5\n)\n\n => Array\n(\n => 223.190.125.216\n)\n\n => Array\n(\n => 103.217.133.119\n)\n\n => Array\n(\n => 103.198.173.132\n)\n\n => Array\n(\n => 47.31.155.89\n)\n\n => Array\n(\n => 223.190.20.253\n)\n\n => Array\n(\n => 104.131.92.125\n)\n\n => Array\n(\n => 223.190.19.152\n)\n\n => Array\n(\n => 103.245.193.191\n)\n\n => Array\n(\n => 106.215.58.255\n)\n\n => Array\n(\n => 119.82.83.238\n)\n\n => Array\n(\n => 106.212.128.138\n)\n\n => Array\n(\n => 139.167.237.36\n)\n\n => Array\n(\n => 222.124.40.250\n)\n\n => Array\n(\n => 134.56.185.169\n)\n\n => Array\n(\n => 54.255.226.31\n)\n\n => Array\n(\n => 137.97.162.31\n)\n\n => Array\n(\n => 95.185.21.191\n)\n\n => Array\n(\n => 171.61.168.151\n)\n\n => Array\n(\n => 137.97.184.4\n)\n\n => Array\n(\n => 106.203.151.202\n)\n\n => Array\n(\n => 39.37.137.0\n)\n\n => Array\n(\n => 45.118.166.66\n)\n\n => Array\n(\n => 14.248.105.100\n)\n\n => Array\n(\n => 106.215.61.185\n)\n\n => Array\n(\n => 202.83.57.179\n)\n\n => Array\n(\n => 89.187.182.176\n)\n\n => Array\n(\n => 49.249.232.198\n)\n\n => Array\n(\n => 132.154.95.236\n)\n\n => Array\n(\n => 223.233.83.230\n)\n\n => Array\n(\n => 183.83.153.14\n)\n\n => Array\n(\n => 125.63.72.210\n)\n\n => Array\n(\n => 207.174.202.11\n)\n\n => Array\n(\n => 119.95.88.59\n)\n\n => Array\n(\n => 122.170.14.150\n)\n\n => Array\n(\n => 45.118.166.75\n)\n\n => Array\n(\n => 103.12.135.37\n)\n\n => Array\n(\n => 49.207.120.225\n)\n\n => Array\n(\n => 182.64.195.207\n)\n\n => Array\n(\n => 103.99.37.16\n)\n\n => Array\n(\n => 46.150.104.221\n)\n\n => Array\n(\n => 104.236.195.147\n)\n\n => Array\n(\n => 103.104.192.43\n)\n\n => Array\n(\n => 24.242.159.118\n)\n\n => Array\n(\n => 39.42.179.143\n)\n\n => Array\n(\n => 111.93.58.131\n)\n\n => Array\n(\n => 193.176.84.127\n)\n\n => Array\n(\n => 209.58.142.218\n)\n\n => Array\n(\n => 69.243.152.129\n)\n\n => Array\n(\n => 117.97.131.249\n)\n\n => Array\n(\n => 103.230.180.89\n)\n\n => Array\n(\n => 106.212.170.192\n)\n\n => Array\n(\n => 171.224.180.95\n)\n\n => Array\n(\n => 158.222.11.87\n)\n\n => Array\n(\n => 119.155.60.246\n)\n\n => Array\n(\n => 41.90.43.129\n)\n\n => Array\n(\n => 185.183.104.170\n)\n\n => Array\n(\n => 14.248.67.65\n)\n\n => Array\n(\n => 117.217.205.82\n)\n\n => Array\n(\n => 111.88.7.209\n)\n\n => Array\n(\n => 49.36.132.244\n)\n\n => Array\n(\n => 171.48.40.2\n)\n\n => Array\n(\n => 119.81.105.2\n)\n\n => Array\n(\n => 49.36.128.114\n)\n\n => Array\n(\n => 213.200.31.93\n)\n\n => Array\n(\n => 2.50.15.110\n)\n\n => Array\n(\n => 120.29.104.67\n)\n\n => Array\n(\n => 223.225.32.221\n)\n\n => Array\n(\n => 14.248.67.195\n)\n\n => Array\n(\n => 119.155.36.13\n)\n\n => Array\n(\n => 101.50.95.104\n)\n\n => Array\n(\n => 104.236.205.233\n)\n\n => Array\n(\n => 122.164.36.150\n)\n\n => Array\n(\n => 157.45.93.209\n)\n\n => Array\n(\n => 182.77.118.100\n)\n\n => Array\n(\n => 182.74.134.218\n)\n\n => Array\n(\n => 183.82.128.146\n)\n\n => Array\n(\n => 112.196.170.234\n)\n\n => Array\n(\n => 122.173.230.178\n)\n\n => Array\n(\n => 122.164.71.199\n)\n\n => Array\n(\n => 51.79.19.31\n)\n\n => Array\n(\n => 58.65.222.20\n)\n\n => Array\n(\n => 103.27.203.97\n)\n\n => Array\n(\n => 111.88.7.242\n)\n\n => Array\n(\n => 14.171.232.77\n)\n\n => Array\n(\n => 46.101.22.182\n)\n\n => Array\n(\n => 103.94.219.19\n)\n\n => Array\n(\n => 139.190.83.30\n)\n\n => Array\n(\n => 223.190.27.184\n)\n\n => Array\n(\n => 182.185.183.34\n)\n\n => Array\n(\n => 91.74.181.242\n)\n\n => Array\n(\n => 222.252.107.14\n)\n\n => Array\n(\n => 137.97.8.28\n)\n\n => Array\n(\n => 46.101.16.229\n)\n\n => Array\n(\n => 122.53.254.229\n)\n\n => Array\n(\n => 106.201.17.180\n)\n\n => Array\n(\n => 123.24.170.129\n)\n\n => Array\n(\n => 182.185.180.79\n)\n\n => Array\n(\n => 223.190.17.4\n)\n\n => Array\n(\n => 213.108.105.1\n)\n\n => Array\n(\n => 171.22.76.9\n)\n\n => Array\n(\n => 202.66.178.164\n)\n\n => Array\n(\n => 178.62.97.171\n)\n\n => Array\n(\n => 167.179.110.209\n)\n\n => Array\n(\n => 223.230.147.172\n)\n\n => Array\n(\n => 76.218.195.160\n)\n\n => Array\n(\n => 14.189.186.178\n)\n\n => Array\n(\n => 157.41.45.143\n)\n\n => Array\n(\n => 223.238.22.53\n)\n\n => Array\n(\n => 111.88.7.244\n)\n\n => Array\n(\n => 5.62.57.19\n)\n\n => Array\n(\n => 106.201.25.216\n)\n\n => Array\n(\n => 117.217.205.33\n)\n\n => Array\n(\n => 111.88.7.215\n)\n\n => Array\n(\n => 106.201.13.77\n)\n\n => Array\n(\n => 50.7.93.29\n)\n\n => Array\n(\n => 123.201.70.112\n)\n\n => Array\n(\n => 39.42.108.226\n)\n\n => Array\n(\n => 27.5.198.29\n)\n\n => Array\n(\n => 223.238.85.187\n)\n\n => Array\n(\n => 171.49.176.32\n)\n\n => Array\n(\n => 14.248.79.242\n)\n\n => Array\n(\n => 46.219.211.183\n)\n\n => Array\n(\n => 185.244.212.251\n)\n\n => Array\n(\n => 14.102.84.126\n)\n\n => Array\n(\n => 106.212.191.52\n)\n\n => Array\n(\n => 154.72.153.203\n)\n\n => Array\n(\n => 14.175.82.64\n)\n\n => Array\n(\n => 141.105.139.131\n)\n\n => Array\n(\n => 182.156.103.98\n)\n\n => Array\n(\n => 117.217.204.75\n)\n\n => Array\n(\n => 104.140.83.115\n)\n\n => Array\n(\n => 119.152.62.8\n)\n\n => Array\n(\n => 45.125.247.94\n)\n\n => Array\n(\n => 137.97.37.252\n)\n\n => Array\n(\n => 117.217.204.73\n)\n\n => Array\n(\n => 14.248.79.133\n)\n\n => Array\n(\n => 39.37.152.52\n)\n\n => Array\n(\n => 103.55.60.54\n)\n\n => Array\n(\n => 102.166.183.88\n)\n\n => Array\n(\n => 5.62.60.162\n)\n\n => Array\n(\n => 5.62.60.163\n)\n\n => Array\n(\n => 160.202.38.131\n)\n\n => Array\n(\n => 106.215.20.253\n)\n\n => Array\n(\n => 39.37.160.54\n)\n\n => Array\n(\n => 119.152.59.186\n)\n\n => Array\n(\n => 183.82.0.164\n)\n\n => Array\n(\n => 41.90.54.87\n)\n\n => Array\n(\n => 157.36.85.158\n)\n\n => Array\n(\n => 110.37.229.162\n)\n\n => Array\n(\n => 203.99.180.148\n)\n\n => Array\n(\n => 117.97.132.91\n)\n\n => Array\n(\n => 171.61.147.105\n)\n\n => Array\n(\n => 14.98.147.214\n)\n\n => Array\n(\n => 209.234.253.191\n)\n\n => Array\n(\n => 92.38.148.60\n)\n\n => Array\n(\n => 178.128.104.139\n)\n\n => Array\n(\n => 212.154.0.176\n)\n\n => Array\n(\n => 103.41.24.141\n)\n\n => Array\n(\n => 2.58.194.132\n)\n\n => Array\n(\n => 180.190.78.169\n)\n\n => Array\n(\n => 106.215.45.182\n)\n\n => Array\n(\n => 125.63.100.222\n)\n\n => Array\n(\n => 110.54.247.17\n)\n\n => Array\n(\n => 103.26.85.105\n)\n\n => Array\n(\n => 39.42.147.3\n)\n\n => Array\n(\n => 137.97.51.41\n)\n\n => Array\n(\n => 71.202.72.27\n)\n\n => Array\n(\n => 119.155.35.10\n)\n\n => Array\n(\n => 202.47.43.120\n)\n\n => Array\n(\n => 183.83.64.101\n)\n\n => Array\n(\n => 182.68.106.141\n)\n\n => Array\n(\n => 171.61.187.87\n)\n\n => Array\n(\n => 178.162.198.118\n)\n\n => Array\n(\n => 115.97.151.218\n)\n\n => Array\n(\n => 196.207.184.210\n)\n\n => Array\n(\n => 198.16.70.51\n)\n\n => Array\n(\n => 41.60.237.33\n)\n\n => Array\n(\n => 47.11.86.26\n)\n\n => Array\n(\n => 117.217.201.183\n)\n\n => Array\n(\n => 203.192.241.79\n)\n\n => Array\n(\n => 122.165.119.85\n)\n\n => Array\n(\n => 23.227.142.218\n)\n\n => Array\n(\n => 178.128.104.221\n)\n\n => Array\n(\n => 14.192.54.163\n)\n\n => Array\n(\n => 139.5.253.218\n)\n\n => Array\n(\n => 117.230.140.127\n)\n\n => Array\n(\n => 195.114.149.199\n)\n\n => Array\n(\n => 14.239.180.220\n)\n\n => Array\n(\n => 103.62.155.94\n)\n\n => Array\n(\n => 118.71.97.14\n)\n\n => Array\n(\n => 137.97.55.163\n)\n\n => Array\n(\n => 202.47.49.198\n)\n\n => Array\n(\n => 171.61.177.85\n)\n\n => Array\n(\n => 137.97.190.224\n)\n\n => Array\n(\n => 117.230.34.142\n)\n\n => Array\n(\n => 103.41.32.5\n)\n\n => Array\n(\n => 203.90.82.237\n)\n\n => Array\n(\n => 125.63.124.238\n)\n\n => Array\n(\n => 103.232.128.78\n)\n\n => Array\n(\n => 106.197.14.227\n)\n\n => Array\n(\n => 81.17.242.244\n)\n\n => Array\n(\n => 81.19.210.179\n)\n\n => Array\n(\n => 103.134.94.98\n)\n\n => Array\n(\n => 110.38.0.86\n)\n\n => Array\n(\n => 103.10.224.195\n)\n\n => Array\n(\n => 45.118.166.89\n)\n\n => Array\n(\n => 115.186.186.68\n)\n\n => Array\n(\n => 138.197.129.237\n)\n\n => Array\n(\n => 14.247.162.52\n)\n\n => Array\n(\n => 103.255.4.5\n)\n\n => Array\n(\n => 14.167.188.254\n)\n\n => Array\n(\n => 5.62.59.54\n)\n\n => Array\n(\n => 27.122.14.80\n)\n\n => Array\n(\n => 39.53.240.21\n)\n\n => Array\n(\n => 39.53.241.243\n)\n\n => Array\n(\n => 117.230.130.161\n)\n\n => Array\n(\n => 118.71.191.149\n)\n\n => Array\n(\n => 5.188.95.54\n)\n\n => Array\n(\n => 66.45.250.27\n)\n\n => Array\n(\n => 106.215.6.175\n)\n\n => Array\n(\n => 27.122.14.86\n)\n\n => Array\n(\n => 103.255.4.51\n)\n\n => Array\n(\n => 101.50.93.119\n)\n\n => Array\n(\n => 137.97.183.51\n)\n\n => Array\n(\n => 117.217.204.185\n)\n\n => Array\n(\n => 95.104.106.82\n)\n\n => Array\n(\n => 5.62.56.211\n)\n\n => Array\n(\n => 103.104.181.214\n)\n\n => Array\n(\n => 36.72.214.243\n)\n\n => Array\n(\n => 5.62.62.219\n)\n\n => Array\n(\n => 110.36.202.4\n)\n\n => Array\n(\n => 103.255.4.253\n)\n\n => Array\n(\n => 110.172.138.61\n)\n\n => Array\n(\n => 159.203.24.195\n)\n\n => Array\n(\n => 13.229.88.42\n)\n\n => Array\n(\n => 59.153.235.20\n)\n\n => Array\n(\n => 171.236.169.32\n)\n\n => Array\n(\n => 14.231.85.206\n)\n\n => Array\n(\n => 119.152.54.103\n)\n\n => Array\n(\n => 103.80.117.202\n)\n\n => Array\n(\n => 223.179.157.75\n)\n\n => Array\n(\n => 122.173.68.249\n)\n\n => Array\n(\n => 188.163.72.113\n)\n\n => Array\n(\n => 119.155.20.164\n)\n\n => Array\n(\n => 103.121.43.68\n)\n\n => Array\n(\n => 5.62.58.6\n)\n\n => Array\n(\n => 203.122.40.154\n)\n\n => Array\n(\n => 222.254.96.203\n)\n\n => Array\n(\n => 103.83.148.167\n)\n\n => Array\n(\n => 103.87.251.226\n)\n\n => Array\n(\n => 123.24.129.24\n)\n\n => Array\n(\n => 137.97.83.8\n)\n\n => Array\n(\n => 223.225.33.132\n)\n\n => Array\n(\n => 128.76.175.190\n)\n\n => Array\n(\n => 195.85.219.32\n)\n\n => Array\n(\n => 139.167.102.93\n)\n\n => Array\n(\n => 49.15.198.253\n)\n\n => Array\n(\n => 45.152.183.172\n)\n\n => Array\n(\n => 42.106.180.136\n)\n\n => Array\n(\n => 95.142.120.9\n)\n\n => Array\n(\n => 139.167.236.4\n)\n\n => Array\n(\n => 159.65.72.167\n)\n\n => Array\n(\n => 49.15.89.2\n)\n\n => Array\n(\n => 42.201.161.195\n)\n\n => Array\n(\n => 27.97.210.38\n)\n\n => Array\n(\n => 171.241.45.19\n)\n\n => Array\n(\n => 42.108.2.18\n)\n\n => Array\n(\n => 171.236.40.68\n)\n\n => Array\n(\n => 110.93.82.102\n)\n\n => Array\n(\n => 43.225.24.186\n)\n\n => Array\n(\n => 117.230.189.119\n)\n\n => Array\n(\n => 124.123.147.187\n)\n\n => Array\n(\n => 216.151.184.250\n)\n\n => Array\n(\n => 49.15.133.16\n)\n\n => Array\n(\n => 49.15.220.74\n)\n\n => Array\n(\n => 157.37.221.246\n)\n\n => Array\n(\n => 176.124.233.112\n)\n\n => Array\n(\n => 118.71.167.40\n)\n\n => Array\n(\n => 182.185.213.161\n)\n\n => Array\n(\n => 47.31.79.248\n)\n\n => Array\n(\n => 223.179.238.192\n)\n\n => Array\n(\n => 79.110.128.219\n)\n\n => Array\n(\n => 106.210.42.111\n)\n\n => Array\n(\n => 47.247.214.229\n)\n\n => Array\n(\n => 193.0.220.108\n)\n\n => Array\n(\n => 1.39.206.254\n)\n\n => Array\n(\n => 123.201.77.38\n)\n\n => Array\n(\n => 115.178.207.21\n)\n\n => Array\n(\n => 37.111.202.92\n)\n\n => Array\n(\n => 49.14.179.243\n)\n\n => Array\n(\n => 117.230.145.171\n)\n\n => Array\n(\n => 171.229.242.96\n)\n\n => Array\n(\n => 27.59.174.209\n)\n\n => Array\n(\n => 1.38.202.211\n)\n\n => Array\n(\n => 157.37.128.46\n)\n\n => Array\n(\n => 49.15.94.80\n)\n\n => Array\n(\n => 123.25.46.147\n)\n\n => Array\n(\n => 117.230.170.185\n)\n\n => Array\n(\n => 5.62.16.19\n)\n\n => Array\n(\n => 103.18.22.25\n)\n\n => Array\n(\n => 103.46.200.132\n)\n\n => Array\n(\n => 27.97.165.126\n)\n\n => Array\n(\n => 117.230.54.241\n)\n\n => Array\n(\n => 27.97.209.76\n)\n\n => Array\n(\n => 47.31.182.109\n)\n\n => Array\n(\n => 47.30.223.221\n)\n\n => Array\n(\n => 103.31.94.82\n)\n\n => Array\n(\n => 103.211.14.45\n)\n\n => Array\n(\n => 171.49.233.58\n)\n\n => Array\n(\n => 65.49.126.95\n)\n\n => Array\n(\n => 69.255.101.170\n)\n\n => Array\n(\n => 27.56.224.67\n)\n\n => Array\n(\n => 117.230.146.86\n)\n\n => Array\n(\n => 27.59.154.52\n)\n\n => Array\n(\n => 132.154.114.10\n)\n\n => Array\n(\n => 182.186.77.60\n)\n\n => Array\n(\n => 117.230.136.74\n)\n\n => Array\n(\n => 43.251.94.253\n)\n\n => Array\n(\n => 103.79.168.225\n)\n\n => Array\n(\n => 117.230.56.51\n)\n\n => Array\n(\n => 27.97.187.45\n)\n\n => Array\n(\n => 137.97.190.61\n)\n\n => Array\n(\n => 193.0.220.26\n)\n\n => Array\n(\n => 49.36.137.62\n)\n\n => Array\n(\n => 47.30.189.248\n)\n\n => Array\n(\n => 109.169.23.84\n)\n\n => Array\n(\n => 111.119.185.46\n)\n\n => Array\n(\n => 103.83.148.246\n)\n\n => Array\n(\n => 157.32.119.138\n)\n\n => Array\n(\n => 5.62.41.53\n)\n\n => Array\n(\n => 47.8.243.236\n)\n\n => Array\n(\n => 112.79.158.69\n)\n\n => Array\n(\n => 180.92.148.218\n)\n\n => Array\n(\n => 157.36.162.154\n)\n\n => Array\n(\n => 39.46.114.47\n)\n\n => Array\n(\n => 117.230.173.250\n)\n\n => Array\n(\n => 117.230.155.188\n)\n\n => Array\n(\n => 193.0.220.17\n)\n\n => Array\n(\n => 117.230.171.166\n)\n\n => Array\n(\n => 49.34.59.228\n)\n\n => Array\n(\n => 111.88.197.247\n)\n\n => Array\n(\n => 47.31.156.112\n)\n\n => Array\n(\n => 137.97.64.180\n)\n\n => Array\n(\n => 14.244.227.18\n)\n\n => Array\n(\n => 113.167.158.8\n)\n\n => Array\n(\n => 39.37.175.189\n)\n\n => Array\n(\n => 139.167.211.8\n)\n\n => Array\n(\n => 73.120.85.235\n)\n\n => Array\n(\n => 104.236.195.72\n)\n\n => Array\n(\n => 27.97.190.71\n)\n\n => Array\n(\n => 79.46.170.222\n)\n\n => Array\n(\n => 102.185.244.207\n)\n\n => Array\n(\n => 37.111.136.30\n)\n\n => Array\n(\n => 50.7.93.28\n)\n\n => Array\n(\n => 110.54.251.43\n)\n\n => Array\n(\n => 49.36.143.40\n)\n\n => Array\n(\n => 103.130.112.185\n)\n\n => Array\n(\n => 37.111.139.202\n)\n\n => Array\n(\n => 49.36.139.108\n)\n\n => Array\n(\n => 37.111.136.179\n)\n\n => Array\n(\n => 123.17.165.77\n)\n\n => Array\n(\n => 49.207.143.206\n)\n\n => Array\n(\n => 39.53.80.149\n)\n\n => Array\n(\n => 223.188.71.214\n)\n\n => Array\n(\n => 1.39.222.233\n)\n\n => Array\n(\n => 117.230.9.85\n)\n\n => Array\n(\n => 103.251.245.216\n)\n\n => Array\n(\n => 122.169.133.145\n)\n\n => Array\n(\n => 43.250.165.57\n)\n\n => Array\n(\n => 39.44.13.235\n)\n\n => Array\n(\n => 157.47.181.2\n)\n\n => Array\n(\n => 27.56.203.50\n)\n\n => Array\n(\n => 191.96.97.58\n)\n\n => Array\n(\n => 111.88.107.172\n)\n\n => Array\n(\n => 113.193.198.136\n)\n\n => Array\n(\n => 117.230.172.175\n)\n\n => Array\n(\n => 191.96.182.239\n)\n\n => Array\n(\n => 2.58.46.28\n)\n\n => Array\n(\n => 183.83.253.87\n)\n\n => Array\n(\n => 49.15.139.242\n)\n\n => Array\n(\n => 42.107.220.236\n)\n\n => Array\n(\n => 14.192.53.196\n)\n\n => Array\n(\n => 42.119.212.202\n)\n\n => Array\n(\n => 192.158.234.45\n)\n\n => Array\n(\n => 49.149.102.192\n)\n\n => Array\n(\n => 47.8.170.17\n)\n\n => Array\n(\n => 117.197.13.247\n)\n\n => Array\n(\n => 116.74.34.44\n)\n\n => Array\n(\n => 103.79.249.163\n)\n\n => Array\n(\n => 182.189.95.70\n)\n\n => Array\n(\n => 137.59.218.118\n)\n\n => Array\n(\n => 103.79.170.243\n)\n\n => Array\n(\n => 39.40.54.25\n)\n\n => Array\n(\n => 119.155.40.170\n)\n\n => Array\n(\n => 1.39.212.157\n)\n\n => Array\n(\n => 70.127.59.89\n)\n\n => Array\n(\n => 14.171.22.58\n)\n\n => Array\n(\n => 194.44.167.141\n)\n\n => Array\n(\n => 111.88.179.154\n)\n\n => Array\n(\n => 117.230.140.232\n)\n\n => Array\n(\n => 137.97.96.128\n)\n\n => Array\n(\n => 198.16.66.123\n)\n\n => Array\n(\n => 106.198.44.193\n)\n\n => Array\n(\n => 119.153.45.75\n)\n\n => Array\n(\n => 49.15.242.208\n)\n\n => Array\n(\n => 119.155.241.20\n)\n\n => Array\n(\n => 106.223.109.155\n)\n\n => Array\n(\n => 119.160.119.245\n)\n\n => Array\n(\n => 106.215.81.160\n)\n\n => Array\n(\n => 1.39.192.211\n)\n\n => Array\n(\n => 223.230.35.208\n)\n\n => Array\n(\n => 39.59.4.158\n)\n\n => Array\n(\n => 43.231.57.234\n)\n\n => Array\n(\n => 60.254.78.193\n)\n\n => Array\n(\n => 122.170.224.87\n)\n\n => Array\n(\n => 117.230.22.141\n)\n\n => Array\n(\n => 119.152.107.211\n)\n\n => Array\n(\n => 103.87.192.206\n)\n\n => Array\n(\n => 39.45.244.47\n)\n\n => Array\n(\n => 50.72.141.94\n)\n\n => Array\n(\n => 39.40.6.128\n)\n\n => Array\n(\n => 39.45.180.186\n)\n\n => Array\n(\n => 49.207.131.233\n)\n\n => Array\n(\n => 139.59.69.142\n)\n\n => Array\n(\n => 111.119.187.29\n)\n\n => Array\n(\n => 119.153.40.69\n)\n\n => Array\n(\n => 49.36.133.64\n)\n\n => Array\n(\n => 103.255.4.249\n)\n\n => Array\n(\n => 198.144.154.15\n)\n\n => Array\n(\n => 1.22.46.172\n)\n\n => Array\n(\n => 103.255.5.46\n)\n\n => Array\n(\n => 27.56.195.188\n)\n\n => Array\n(\n => 203.101.167.53\n)\n\n => Array\n(\n => 117.230.62.195\n)\n\n => Array\n(\n => 103.240.194.186\n)\n\n => Array\n(\n => 107.170.166.118\n)\n\n => Array\n(\n => 101.53.245.80\n)\n\n => Array\n(\n => 157.43.13.208\n)\n\n => Array\n(\n => 137.97.100.77\n)\n\n => Array\n(\n => 47.31.150.208\n)\n\n => Array\n(\n => 137.59.222.65\n)\n\n => Array\n(\n => 103.85.127.250\n)\n\n => Array\n(\n => 103.214.119.32\n)\n\n => Array\n(\n => 182.255.49.52\n)\n\n => Array\n(\n => 103.75.247.72\n)\n\n => Array\n(\n => 103.85.125.250\n)\n\n => Array\n(\n => 183.83.253.167\n)\n\n => Array\n(\n => 1.39.222.111\n)\n\n => Array\n(\n => 111.119.185.9\n)\n\n => Array\n(\n => 111.119.187.10\n)\n\n => Array\n(\n => 39.37.147.144\n)\n\n => Array\n(\n => 103.200.198.183\n)\n\n => Array\n(\n => 1.39.222.18\n)\n\n => Array\n(\n => 198.8.80.103\n)\n\n => Array\n(\n => 42.108.1.243\n)\n\n => Array\n(\n => 111.119.187.16\n)\n\n => Array\n(\n => 39.40.241.8\n)\n\n => Array\n(\n => 122.169.150.158\n)\n\n => Array\n(\n => 39.40.215.119\n)\n\n => Array\n(\n => 103.255.5.77\n)\n\n => Array\n(\n => 157.38.108.196\n)\n\n => Array\n(\n => 103.255.4.67\n)\n\n => Array\n(\n => 5.62.60.62\n)\n\n => Array\n(\n => 39.37.146.202\n)\n\n => Array\n(\n => 110.138.6.221\n)\n\n => Array\n(\n => 49.36.143.88\n)\n\n => Array\n(\n => 37.1.215.39\n)\n\n => Array\n(\n => 27.106.59.190\n)\n\n => Array\n(\n => 139.167.139.41\n)\n\n => Array\n(\n => 114.142.166.179\n)\n\n => Array\n(\n => 223.225.240.112\n)\n\n => Array\n(\n => 103.255.5.36\n)\n\n => Array\n(\n => 175.136.1.48\n)\n\n => Array\n(\n => 103.82.80.166\n)\n\n => Array\n(\n => 182.185.196.126\n)\n\n => Array\n(\n => 157.43.45.76\n)\n\n => Array\n(\n => 119.152.132.49\n)\n\n => Array\n(\n => 5.62.62.162\n)\n\n => Array\n(\n => 103.255.4.39\n)\n\n => Array\n(\n => 202.5.144.153\n)\n\n => Array\n(\n => 1.39.223.210\n)\n\n => Array\n(\n => 92.38.176.154\n)\n\n => Array\n(\n => 117.230.186.142\n)\n\n => Array\n(\n => 183.83.39.123\n)\n\n => Array\n(\n => 182.185.156.76\n)\n\n => Array\n(\n => 104.236.74.212\n)\n\n => Array\n(\n => 107.170.145.187\n)\n\n => Array\n(\n => 117.102.7.98\n)\n\n => Array\n(\n => 137.59.220.0\n)\n\n => Array\n(\n => 157.47.222.14\n)\n\n => Array\n(\n => 47.15.206.82\n)\n\n => Array\n(\n => 117.230.159.99\n)\n\n => Array\n(\n => 117.230.175.151\n)\n\n => Array\n(\n => 157.50.97.18\n)\n\n => Array\n(\n => 117.230.47.164\n)\n\n => Array\n(\n => 77.111.244.34\n)\n\n => Array\n(\n => 139.167.189.131\n)\n\n => Array\n(\n => 1.39.204.103\n)\n\n => Array\n(\n => 117.230.58.0\n)\n\n => Array\n(\n => 182.185.226.66\n)\n\n => Array\n(\n => 115.42.70.119\n)\n\n => Array\n(\n => 171.48.114.134\n)\n\n => Array\n(\n => 144.34.218.75\n)\n\n => Array\n(\n => 199.58.164.135\n)\n\n => Array\n(\n => 101.53.228.151\n)\n\n => Array\n(\n => 117.230.50.57\n)\n\n => Array\n(\n => 223.225.138.84\n)\n\n => Array\n(\n => 110.225.67.65\n)\n\n => Array\n(\n => 47.15.200.39\n)\n\n => Array\n(\n => 39.42.20.127\n)\n\n => Array\n(\n => 117.97.241.81\n)\n\n => Array\n(\n => 111.119.185.11\n)\n\n => Array\n(\n => 103.100.5.94\n)\n\n => Array\n(\n => 103.25.137.69\n)\n\n => Array\n(\n => 47.15.197.159\n)\n\n => Array\n(\n => 223.188.176.122\n)\n\n => Array\n(\n => 27.4.175.80\n)\n\n => Array\n(\n => 181.215.43.82\n)\n\n => Array\n(\n => 27.56.228.157\n)\n\n => Array\n(\n => 117.230.19.19\n)\n\n => Array\n(\n => 47.15.208.71\n)\n\n => Array\n(\n => 119.155.21.176\n)\n\n => Array\n(\n => 47.15.234.202\n)\n\n => Array\n(\n => 117.230.144.135\n)\n\n => Array\n(\n => 112.79.139.199\n)\n\n => Array\n(\n => 116.75.246.41\n)\n\n => Array\n(\n => 117.230.177.126\n)\n\n => Array\n(\n => 212.103.48.134\n)\n\n => Array\n(\n => 102.69.228.78\n)\n\n => Array\n(\n => 117.230.37.118\n)\n\n => Array\n(\n => 175.143.61.75\n)\n\n => Array\n(\n => 139.167.56.138\n)\n\n => Array\n(\n => 58.145.189.250\n)\n\n => Array\n(\n => 103.255.5.65\n)\n\n => Array\n(\n => 39.37.153.182\n)\n\n => Array\n(\n => 157.43.85.106\n)\n\n => Array\n(\n => 185.209.178.77\n)\n\n => Array\n(\n => 1.39.212.45\n)\n\n => Array\n(\n => 103.72.7.16\n)\n\n => Array\n(\n => 117.97.185.244\n)\n\n => Array\n(\n => 117.230.59.106\n)\n\n => Array\n(\n => 137.97.121.103\n)\n\n => Array\n(\n => 103.82.123.215\n)\n\n => Array\n(\n => 103.68.217.248\n)\n\n => Array\n(\n => 157.39.27.175\n)\n\n => Array\n(\n => 47.31.100.249\n)\n\n => Array\n(\n => 14.171.232.139\n)\n\n => Array\n(\n => 103.31.93.208\n)\n\n => Array\n(\n => 117.230.56.77\n)\n\n => Array\n(\n => 124.182.25.124\n)\n\n => Array\n(\n => 106.66.191.242\n)\n\n => Array\n(\n => 175.107.237.25\n)\n\n => Array\n(\n => 119.155.1.27\n)\n\n => Array\n(\n => 72.255.6.24\n)\n\n => Array\n(\n => 192.140.152.223\n)\n\n => Array\n(\n => 212.103.48.136\n)\n\n => Array\n(\n => 39.45.134.56\n)\n\n => Array\n(\n => 139.167.173.30\n)\n\n => Array\n(\n => 117.230.63.87\n)\n\n => Array\n(\n => 182.189.95.203\n)\n\n => Array\n(\n => 49.204.183.248\n)\n\n => Array\n(\n => 47.31.125.188\n)\n\n => Array\n(\n => 103.252.171.13\n)\n\n => Array\n(\n => 112.198.74.36\n)\n\n => Array\n(\n => 27.109.113.152\n)\n\n => Array\n(\n => 42.112.233.44\n)\n\n => Array\n(\n => 47.31.68.193\n)\n\n => Array\n(\n => 103.252.171.134\n)\n\n => Array\n(\n => 77.123.32.114\n)\n\n => Array\n(\n => 1.38.189.66\n)\n\n => Array\n(\n => 39.37.181.108\n)\n\n => Array\n(\n => 42.106.44.61\n)\n\n => Array\n(\n => 157.36.8.39\n)\n\n => Array\n(\n => 223.238.41.53\n)\n\n => Array\n(\n => 202.89.77.10\n)\n\n => Array\n(\n => 117.230.150.68\n)\n\n => Array\n(\n => 175.176.87.60\n)\n\n => Array\n(\n => 137.97.117.87\n)\n\n => Array\n(\n => 132.154.123.11\n)\n\n => Array\n(\n => 45.113.124.141\n)\n\n => Array\n(\n => 103.87.56.203\n)\n\n => Array\n(\n => 159.89.171.156\n)\n\n => Array\n(\n => 119.155.53.88\n)\n\n => Array\n(\n => 222.252.107.215\n)\n\n => Array\n(\n => 132.154.75.238\n)\n\n => Array\n(\n => 122.183.41.168\n)\n\n => Array\n(\n => 42.106.254.158\n)\n\n => Array\n(\n => 103.252.171.37\n)\n\n => Array\n(\n => 202.59.13.180\n)\n\n => Array\n(\n => 37.111.139.137\n)\n\n => Array\n(\n => 39.42.93.25\n)\n\n => Array\n(\n => 118.70.177.156\n)\n\n => Array\n(\n => 117.230.148.64\n)\n\n => Array\n(\n => 39.42.15.194\n)\n\n => Array\n(\n => 137.97.176.86\n)\n\n => Array\n(\n => 106.210.102.113\n)\n\n => Array\n(\n => 39.59.84.236\n)\n\n => Array\n(\n => 49.206.187.177\n)\n\n => Array\n(\n => 117.230.133.11\n)\n\n => Array\n(\n => 42.106.253.173\n)\n\n => Array\n(\n => 178.62.102.23\n)\n\n => Array\n(\n => 111.92.76.175\n)\n\n => Array\n(\n => 132.154.86.45\n)\n\n => Array\n(\n => 117.230.128.39\n)\n\n => Array\n(\n => 117.230.53.165\n)\n\n => Array\n(\n => 49.37.200.171\n)\n\n => Array\n(\n => 104.236.213.230\n)\n\n => Array\n(\n => 103.140.30.81\n)\n\n => Array\n(\n => 59.103.104.117\n)\n\n => Array\n(\n => 65.49.126.79\n)\n\n => Array\n(\n => 202.59.12.251\n)\n\n => Array\n(\n => 37.111.136.17\n)\n\n => Array\n(\n => 163.53.85.67\n)\n\n => Array\n(\n => 123.16.240.73\n)\n\n => Array\n(\n => 103.211.14.183\n)\n\n => Array\n(\n => 103.248.93.211\n)\n\n => Array\n(\n => 116.74.59.127\n)\n\n => Array\n(\n => 137.97.169.254\n)\n\n => Array\n(\n => 113.177.79.100\n)\n\n => Array\n(\n => 74.82.60.187\n)\n\n => Array\n(\n => 117.230.157.66\n)\n\n => Array\n(\n => 169.149.194.241\n)\n\n => Array\n(\n => 117.230.156.11\n)\n\n => Array\n(\n => 202.59.12.157\n)\n\n => Array\n(\n => 42.106.181.25\n)\n\n => Array\n(\n => 202.59.13.78\n)\n\n => Array\n(\n => 39.37.153.32\n)\n\n => Array\n(\n => 177.188.216.175\n)\n\n => Array\n(\n => 222.252.53.165\n)\n\n => Array\n(\n => 37.139.23.89\n)\n\n => Array\n(\n => 117.230.139.150\n)\n\n => Array\n(\n => 104.131.176.234\n)\n\n => Array\n(\n => 42.106.181.117\n)\n\n => Array\n(\n => 117.230.180.94\n)\n\n => Array\n(\n => 180.190.171.5\n)\n\n => Array\n(\n => 150.129.165.185\n)\n\n => Array\n(\n => 51.15.0.150\n)\n\n => Array\n(\n => 42.111.4.84\n)\n\n => Array\n(\n => 74.82.60.116\n)\n\n => Array\n(\n => 137.97.121.165\n)\n\n => Array\n(\n => 64.62.187.194\n)\n\n => Array\n(\n => 137.97.106.162\n)\n\n => Array\n(\n => 137.97.92.46\n)\n\n => Array\n(\n => 137.97.170.25\n)\n\n => Array\n(\n => 103.104.192.100\n)\n\n => Array\n(\n => 185.246.211.34\n)\n\n => Array\n(\n => 119.160.96.78\n)\n\n => Array\n(\n => 212.103.48.152\n)\n\n => Array\n(\n => 183.83.153.90\n)\n\n => Array\n(\n => 117.248.150.41\n)\n\n => Array\n(\n => 185.240.246.180\n)\n\n => Array\n(\n => 162.253.131.125\n)\n\n => Array\n(\n => 117.230.153.217\n)\n\n => Array\n(\n => 117.230.169.1\n)\n\n => Array\n(\n => 49.15.138.247\n)\n\n => Array\n(\n => 117.230.37.110\n)\n\n => Array\n(\n => 14.167.188.75\n)\n\n => Array\n(\n => 169.149.239.93\n)\n\n => Array\n(\n => 103.216.176.91\n)\n\n => Array\n(\n => 117.230.12.126\n)\n\n => Array\n(\n => 184.75.209.110\n)\n\n => Array\n(\n => 117.230.6.60\n)\n\n => Array\n(\n => 117.230.135.132\n)\n\n => Array\n(\n => 31.179.29.109\n)\n\n => Array\n(\n => 74.121.188.186\n)\n\n => Array\n(\n => 117.230.35.5\n)\n\n => Array\n(\n => 111.92.74.239\n)\n\n => Array\n(\n => 104.245.144.236\n)\n\n => Array\n(\n => 39.50.22.100\n)\n\n => Array\n(\n => 47.31.190.23\n)\n\n => Array\n(\n => 157.44.73.187\n)\n\n => Array\n(\n => 117.230.8.91\n)\n\n => Array\n(\n => 157.32.18.2\n)\n\n => Array\n(\n => 111.119.187.43\n)\n\n => Array\n(\n => 203.101.185.246\n)\n\n => Array\n(\n => 5.62.34.22\n)\n\n => Array\n(\n => 122.8.143.76\n)\n\n => Array\n(\n => 115.186.2.187\n)\n\n => Array\n(\n => 202.142.110.89\n)\n\n => Array\n(\n => 157.50.61.254\n)\n\n => Array\n(\n => 223.182.211.185\n)\n\n => Array\n(\n => 103.85.125.210\n)\n\n => Array\n(\n => 103.217.133.147\n)\n\n => Array\n(\n => 103.60.196.217\n)\n\n => Array\n(\n => 157.44.238.6\n)\n\n => Array\n(\n => 117.196.225.68\n)\n\n => Array\n(\n => 104.254.92.52\n)\n\n => Array\n(\n => 39.42.46.72\n)\n\n => Array\n(\n => 221.132.119.36\n)\n\n => Array\n(\n => 111.92.77.47\n)\n\n => Array\n(\n => 223.225.19.152\n)\n\n => Array\n(\n => 159.89.121.217\n)\n\n => Array\n(\n => 39.53.221.205\n)\n\n => Array\n(\n => 193.34.217.28\n)\n\n => Array\n(\n => 139.167.206.36\n)\n\n => Array\n(\n => 96.40.10.7\n)\n\n => Array\n(\n => 124.29.198.123\n)\n\n => Array\n(\n => 117.196.226.1\n)\n\n => Array\n(\n => 106.200.85.135\n)\n\n => Array\n(\n => 106.223.180.28\n)\n\n => Array\n(\n => 103.49.232.110\n)\n\n => Array\n(\n => 139.167.208.50\n)\n\n => Array\n(\n => 139.167.201.102\n)\n\n => Array\n(\n => 14.244.224.237\n)\n\n => Array\n(\n => 103.140.31.187\n)\n\n => Array\n(\n => 49.36.134.136\n)\n\n => Array\n(\n => 160.16.61.75\n)\n\n => Array\n(\n => 103.18.22.228\n)\n\n => Array\n(\n => 47.9.74.121\n)\n\n => Array\n(\n => 47.30.216.159\n)\n\n => Array\n(\n => 117.248.150.78\n)\n\n => Array\n(\n => 5.62.34.17\n)\n\n => Array\n(\n => 139.167.247.181\n)\n\n => Array\n(\n => 193.176.84.29\n)\n\n => Array\n(\n => 103.195.201.121\n)\n\n => Array\n(\n => 89.187.175.115\n)\n\n => Array\n(\n => 137.97.81.251\n)\n\n => Array\n(\n => 157.51.147.62\n)\n\n => Array\n(\n => 103.104.192.42\n)\n\n => Array\n(\n => 14.171.235.26\n)\n\n => Array\n(\n => 178.62.89.121\n)\n\n => Array\n(\n => 119.155.4.164\n)\n\n => Array\n(\n => 43.250.241.89\n)\n\n => Array\n(\n => 103.31.100.80\n)\n\n => Array\n(\n => 119.155.7.44\n)\n\n => Array\n(\n => 106.200.73.114\n)\n\n => Array\n(\n => 77.111.246.18\n)\n\n => Array\n(\n => 157.39.99.247\n)\n\n => Array\n(\n => 103.77.42.132\n)\n\n => Array\n(\n => 74.115.214.133\n)\n\n => Array\n(\n => 117.230.49.224\n)\n\n => Array\n(\n => 39.50.108.238\n)\n\n => Array\n(\n => 47.30.221.45\n)\n\n => Array\n(\n => 95.133.164.235\n)\n\n => Array\n(\n => 212.103.48.141\n)\n\n => Array\n(\n => 104.194.218.147\n)\n\n => Array\n(\n => 106.200.88.241\n)\n\n => Array\n(\n => 182.189.212.211\n)\n\n => Array\n(\n => 39.50.142.129\n)\n\n => Array\n(\n => 77.234.43.133\n)\n\n => Array\n(\n => 49.15.192.58\n)\n\n => Array\n(\n => 119.153.37.55\n)\n\n => Array\n(\n => 27.56.156.128\n)\n\n => Array\n(\n => 168.211.4.33\n)\n\n => Array\n(\n => 203.81.236.239\n)\n\n => Array\n(\n => 157.51.149.61\n)\n\n => Array\n(\n => 117.230.45.255\n)\n\n => Array\n(\n => 39.42.106.169\n)\n\n => Array\n(\n => 27.71.89.76\n)\n\n => Array\n(\n => 123.27.109.167\n)\n\n => Array\n(\n => 106.202.21.91\n)\n\n => Array\n(\n => 103.85.125.206\n)\n\n => Array\n(\n => 122.173.250.229\n)\n\n => Array\n(\n => 106.210.102.77\n)\n\n => Array\n(\n => 134.209.47.156\n)\n\n => Array\n(\n => 45.127.232.12\n)\n\n => Array\n(\n => 45.134.224.11\n)\n\n => Array\n(\n => 27.71.89.122\n)\n\n => Array\n(\n => 157.38.105.117\n)\n\n => Array\n(\n => 191.96.73.215\n)\n\n => Array\n(\n => 171.241.92.31\n)\n\n => Array\n(\n => 49.149.104.235\n)\n\n => Array\n(\n => 104.229.247.252\n)\n\n => Array\n(\n => 111.92.78.42\n)\n\n => Array\n(\n => 47.31.88.183\n)\n\n => Array\n(\n => 171.61.203.234\n)\n\n => Array\n(\n => 183.83.226.192\n)\n\n => Array\n(\n => 119.157.107.45\n)\n\n => Array\n(\n => 91.202.163.205\n)\n\n => Array\n(\n => 157.43.62.108\n)\n\n => Array\n(\n => 182.68.248.92\n)\n\n => Array\n(\n => 157.32.251.234\n)\n\n => Array\n(\n => 110.225.196.188\n)\n\n => Array\n(\n => 27.71.89.98\n)\n\n => Array\n(\n => 175.176.87.3\n)\n\n => Array\n(\n => 103.55.90.208\n)\n\n => Array\n(\n => 47.31.41.163\n)\n\n => Array\n(\n => 223.182.195.5\n)\n\n => Array\n(\n => 122.52.101.166\n)\n\n => Array\n(\n => 103.207.82.154\n)\n\n => Array\n(\n => 171.224.178.84\n)\n\n => Array\n(\n => 110.225.235.187\n)\n\n => Array\n(\n => 119.160.97.248\n)\n\n => Array\n(\n => 116.90.101.121\n)\n\n => Array\n(\n => 182.255.48.154\n)\n\n => Array\n(\n => 180.149.221.140\n)\n\n => Array\n(\n => 194.44.79.13\n)\n\n => Array\n(\n => 47.247.18.3\n)\n\n => Array\n(\n => 27.56.242.95\n)\n\n => Array\n(\n => 41.60.236.83\n)\n\n => Array\n(\n => 122.164.162.7\n)\n\n => Array\n(\n => 71.136.154.5\n)\n\n => Array\n(\n => 132.154.119.122\n)\n\n => Array\n(\n => 110.225.80.135\n)\n\n => Array\n(\n => 84.17.61.143\n)\n\n => Array\n(\n => 119.160.102.244\n)\n\n => Array\n(\n => 47.31.27.44\n)\n\n => Array\n(\n => 27.71.89.160\n)\n\n => Array\n(\n => 107.175.38.101\n)\n\n => Array\n(\n => 195.211.150.152\n)\n\n => Array\n(\n => 157.35.250.255\n)\n\n => Array\n(\n => 111.119.187.53\n)\n\n => Array\n(\n => 119.152.97.213\n)\n\n => Array\n(\n => 180.92.143.145\n)\n\n => Array\n(\n => 72.255.61.46\n)\n\n => Array\n(\n => 47.8.183.6\n)\n\n => Array\n(\n => 92.38.148.53\n)\n\n => Array\n(\n => 122.173.194.72\n)\n\n => Array\n(\n => 183.83.226.97\n)\n\n => Array\n(\n => 122.173.73.231\n)\n\n => Array\n(\n => 119.160.101.101\n)\n\n => Array\n(\n => 93.177.75.174\n)\n\n => Array\n(\n => 115.97.196.70\n)\n\n => Array\n(\n => 111.119.187.35\n)\n\n => Array\n(\n => 103.226.226.154\n)\n\n => Array\n(\n => 103.244.172.73\n)\n\n => Array\n(\n => 119.155.61.222\n)\n\n => Array\n(\n => 157.37.184.92\n)\n\n => Array\n(\n => 119.160.103.204\n)\n\n => Array\n(\n => 175.176.87.21\n)\n\n => Array\n(\n => 185.51.228.246\n)\n\n => Array\n(\n => 103.250.164.255\n)\n\n => Array\n(\n => 122.181.194.16\n)\n\n => Array\n(\n => 157.37.230.232\n)\n\n => Array\n(\n => 103.105.236.6\n)\n\n => Array\n(\n => 111.88.128.174\n)\n\n => Array\n(\n => 37.111.139.82\n)\n\n => Array\n(\n => 39.34.133.52\n)\n\n => Array\n(\n => 113.177.79.80\n)\n\n => Array\n(\n => 180.183.71.184\n)\n\n => Array\n(\n => 116.72.218.255\n)\n\n => Array\n(\n => 119.160.117.26\n)\n\n => Array\n(\n => 158.222.0.252\n)\n\n => Array\n(\n => 23.227.142.146\n)\n\n => Array\n(\n => 122.162.152.152\n)\n\n => Array\n(\n => 103.255.149.106\n)\n\n => Array\n(\n => 104.236.53.155\n)\n\n => Array\n(\n => 119.160.119.155\n)\n\n => Array\n(\n => 175.107.214.244\n)\n\n => Array\n(\n => 102.7.116.7\n)\n\n => Array\n(\n => 111.88.91.132\n)\n\n => Array\n(\n => 119.157.248.108\n)\n\n => Array\n(\n => 222.252.36.107\n)\n\n => Array\n(\n => 157.46.209.227\n)\n\n => Array\n(\n => 39.40.54.1\n)\n\n => Array\n(\n => 223.225.19.254\n)\n\n => Array\n(\n => 154.72.150.8\n)\n\n => Array\n(\n => 107.181.177.130\n)\n\n => Array\n(\n => 101.50.75.31\n)\n\n => Array\n(\n => 84.17.58.69\n)\n\n => Array\n(\n => 178.62.5.157\n)\n\n => Array\n(\n => 112.206.175.147\n)\n\n => Array\n(\n => 137.97.113.137\n)\n\n => Array\n(\n => 103.53.44.154\n)\n\n => Array\n(\n => 180.92.143.129\n)\n\n => Array\n(\n => 14.231.223.7\n)\n\n => Array\n(\n => 167.88.63.201\n)\n\n => Array\n(\n => 103.140.204.8\n)\n\n => Array\n(\n => 221.121.135.108\n)\n\n => Array\n(\n => 119.160.97.129\n)\n\n => Array\n(\n => 27.5.168.249\n)\n\n => Array\n(\n => 119.160.102.191\n)\n\n => Array\n(\n => 122.162.219.12\n)\n\n => Array\n(\n => 157.50.141.122\n)\n\n => Array\n(\n => 43.245.8.17\n)\n\n => Array\n(\n => 113.181.198.179\n)\n\n => Array\n(\n => 47.30.221.59\n)\n\n => Array\n(\n => 110.38.29.246\n)\n\n => Array\n(\n => 14.192.140.199\n)\n\n => Array\n(\n => 24.68.10.106\n)\n\n => Array\n(\n => 47.30.209.179\n)\n\n => Array\n(\n => 106.223.123.21\n)\n\n => Array\n(\n => 103.224.48.30\n)\n\n => Array\n(\n => 104.131.19.173\n)\n\n => Array\n(\n => 119.157.100.206\n)\n\n => Array\n(\n => 103.10.226.73\n)\n\n => Array\n(\n => 162.208.51.163\n)\n\n => Array\n(\n => 47.30.221.227\n)\n\n => Array\n(\n => 119.160.116.210\n)\n\n => Array\n(\n => 198.16.78.43\n)\n\n => Array\n(\n => 39.44.201.151\n)\n\n => Array\n(\n => 71.63.181.84\n)\n\n => Array\n(\n => 14.142.192.218\n)\n\n => Array\n(\n => 39.34.147.178\n)\n\n => Array\n(\n => 111.92.75.25\n)\n\n => Array\n(\n => 45.135.239.58\n)\n\n => Array\n(\n => 14.232.235.1\n)\n\n => Array\n(\n => 49.144.100.155\n)\n\n => Array\n(\n => 62.182.99.33\n)\n\n => Array\n(\n => 104.243.212.187\n)\n\n => Array\n(\n => 59.97.132.214\n)\n\n => Array\n(\n => 47.9.15.179\n)\n\n => Array\n(\n => 39.44.103.186\n)\n\n => Array\n(\n => 183.83.241.132\n)\n\n => Array\n(\n => 103.41.24.180\n)\n\n => Array\n(\n => 104.238.46.39\n)\n\n => Array\n(\n => 103.79.170.78\n)\n\n => Array\n(\n => 59.103.138.81\n)\n\n => Array\n(\n => 106.198.191.146\n)\n\n => Array\n(\n => 106.198.255.122\n)\n\n => Array\n(\n => 47.31.46.37\n)\n\n => Array\n(\n => 109.169.23.76\n)\n\n => Array\n(\n => 103.143.7.55\n)\n\n => Array\n(\n => 49.207.114.52\n)\n\n => Array\n(\n => 198.54.106.250\n)\n\n => Array\n(\n => 39.50.64.18\n)\n\n => Array\n(\n => 222.252.48.132\n)\n\n => Array\n(\n => 42.201.186.53\n)\n\n => Array\n(\n => 115.97.198.95\n)\n\n => Array\n(\n => 93.76.134.244\n)\n\n => Array\n(\n => 122.173.15.189\n)\n\n => Array\n(\n => 39.62.38.29\n)\n\n => Array\n(\n => 103.201.145.254\n)\n\n => Array\n(\n => 111.119.187.23\n)\n\n => Array\n(\n => 157.50.66.33\n)\n\n => Array\n(\n => 157.49.68.163\n)\n\n => Array\n(\n => 103.85.125.215\n)\n\n => Array\n(\n => 103.255.4.16\n)\n\n => Array\n(\n => 223.181.246.206\n)\n\n => Array\n(\n => 39.40.109.226\n)\n\n => Array\n(\n => 43.225.70.157\n)\n\n => Array\n(\n => 103.211.18.168\n)\n\n => Array\n(\n => 137.59.221.60\n)\n\n => Array\n(\n => 103.81.214.63\n)\n\n => Array\n(\n => 39.35.163.2\n)\n\n => Array\n(\n => 106.205.124.39\n)\n\n => Array\n(\n => 209.99.165.216\n)\n\n => Array\n(\n => 103.75.247.187\n)\n\n => Array\n(\n => 157.46.217.41\n)\n\n => Array\n(\n => 75.186.73.80\n)\n\n => Array\n(\n => 212.103.48.153\n)\n\n => Array\n(\n => 47.31.61.167\n)\n\n => Array\n(\n => 119.152.145.131\n)\n\n => Array\n(\n => 171.76.177.244\n)\n\n => Array\n(\n => 103.135.78.50\n)\n\n => Array\n(\n => 103.79.170.75\n)\n\n => Array\n(\n => 105.160.22.74\n)\n\n => Array\n(\n => 47.31.20.153\n)\n\n => Array\n(\n => 42.107.204.65\n)\n\n => Array\n(\n => 49.207.131.35\n)\n\n => Array\n(\n => 92.38.148.61\n)\n\n => Array\n(\n => 183.83.255.206\n)\n\n => Array\n(\n => 107.181.177.131\n)\n\n => Array\n(\n => 39.40.220.157\n)\n\n => Array\n(\n => 39.41.133.176\n)\n\n => Array\n(\n => 103.81.214.61\n)\n\n => Array\n(\n => 223.235.108.46\n)\n\n => Array\n(\n => 171.241.52.118\n)\n\n => Array\n(\n => 39.57.138.47\n)\n\n => Array\n(\n => 106.204.196.172\n)\n\n => Array\n(\n => 39.53.228.40\n)\n\n => Array\n(\n => 185.242.5.99\n)\n\n => Array\n(\n => 103.255.5.96\n)\n\n => Array\n(\n => 157.46.212.120\n)\n\n => Array\n(\n => 107.181.177.138\n)\n\n => Array\n(\n => 47.30.193.65\n)\n\n => Array\n(\n => 39.37.178.33\n)\n\n => Array\n(\n => 157.46.173.29\n)\n\n => Array\n(\n => 39.57.238.211\n)\n\n => Array\n(\n => 157.37.245.113\n)\n\n => Array\n(\n => 47.30.201.138\n)\n\n => Array\n(\n => 106.204.193.108\n)\n\n => Array\n(\n => 212.103.50.212\n)\n\n => Array\n(\n => 58.65.221.187\n)\n\n => Array\n(\n => 178.62.92.29\n)\n\n => Array\n(\n => 111.92.77.166\n)\n\n => Array\n(\n => 47.30.223.158\n)\n\n => Array\n(\n => 103.224.54.83\n)\n\n => Array\n(\n => 119.153.43.22\n)\n\n => Array\n(\n => 223.181.126.251\n)\n\n => Array\n(\n => 39.42.175.202\n)\n\n => Array\n(\n => 103.224.54.190\n)\n\n => Array\n(\n => 49.36.141.210\n)\n\n => Array\n(\n => 5.62.63.218\n)\n\n => Array\n(\n => 39.59.9.18\n)\n\n => Array\n(\n => 111.88.86.45\n)\n\n => Array\n(\n => 178.54.139.5\n)\n\n => Array\n(\n => 116.68.105.241\n)\n\n => Array\n(\n => 119.160.96.187\n)\n\n => Array\n(\n => 182.189.192.103\n)\n\n => Array\n(\n => 119.160.96.143\n)\n\n => Array\n(\n => 110.225.89.98\n)\n\n => Array\n(\n => 169.149.195.134\n)\n\n => Array\n(\n => 103.238.104.54\n)\n\n => Array\n(\n => 47.30.208.142\n)\n\n => Array\n(\n => 157.46.179.209\n)\n\n => Array\n(\n => 223.235.38.119\n)\n\n => Array\n(\n => 42.106.180.165\n)\n\n => Array\n(\n => 154.122.240.239\n)\n\n => Array\n(\n => 106.223.104.191\n)\n\n => Array\n(\n => 111.93.110.218\n)\n\n => Array\n(\n => 182.183.161.171\n)\n\n => Array\n(\n => 157.44.184.211\n)\n\n => Array\n(\n => 157.50.185.193\n)\n\n => Array\n(\n => 117.230.19.194\n)\n\n => Array\n(\n => 162.243.246.160\n)\n\n => Array\n(\n => 106.223.143.53\n)\n\n => Array\n(\n => 39.59.41.15\n)\n\n => Array\n(\n => 106.210.65.42\n)\n\n => Array\n(\n => 180.243.144.208\n)\n\n => Array\n(\n => 116.68.105.22\n)\n\n => Array\n(\n => 115.42.70.46\n)\n\n => Array\n(\n => 99.72.192.148\n)\n\n => Array\n(\n => 182.183.182.48\n)\n\n => Array\n(\n => 171.48.58.97\n)\n\n => Array\n(\n => 37.120.131.188\n)\n\n => Array\n(\n => 117.99.167.177\n)\n\n => Array\n(\n => 111.92.76.210\n)\n\n => Array\n(\n => 14.192.144.245\n)\n\n => Array\n(\n => 169.149.242.87\n)\n\n => Array\n(\n => 47.30.198.149\n)\n\n => Array\n(\n => 59.103.57.140\n)\n\n => Array\n(\n => 117.230.161.168\n)\n\n => Array\n(\n => 110.225.88.173\n)\n\n => Array\n(\n => 169.149.246.95\n)\n\n => Array\n(\n => 42.106.180.52\n)\n\n => Array\n(\n => 14.231.160.157\n)\n\n => Array\n(\n => 123.27.109.47\n)\n\n => Array\n(\n => 157.46.130.54\n)\n\n => Array\n(\n => 39.42.73.194\n)\n\n => Array\n(\n => 117.230.18.147\n)\n\n => Array\n(\n => 27.59.231.98\n)\n\n => Array\n(\n => 125.209.78.227\n)\n\n => Array\n(\n => 157.34.80.145\n)\n\n => Array\n(\n => 42.201.251.86\n)\n\n => Array\n(\n => 117.230.129.158\n)\n\n => Array\n(\n => 103.82.80.103\n)\n\n => Array\n(\n => 47.9.171.228\n)\n\n => Array\n(\n => 117.230.24.92\n)\n\n => Array\n(\n => 103.129.143.119\n)\n\n => Array\n(\n => 39.40.213.45\n)\n\n => Array\n(\n => 178.92.188.214\n)\n\n => Array\n(\n => 110.235.232.191\n)\n\n => Array\n(\n => 5.62.34.18\n)\n\n => Array\n(\n => 47.30.212.134\n)\n\n => Array\n(\n => 157.42.34.196\n)\n\n => Array\n(\n => 157.32.169.9\n)\n\n => Array\n(\n => 103.255.4.11\n)\n\n => Array\n(\n => 117.230.13.69\n)\n\n => Array\n(\n => 117.230.58.97\n)\n\n => Array\n(\n => 92.52.138.39\n)\n\n => Array\n(\n => 221.132.119.63\n)\n\n => Array\n(\n => 117.97.167.188\n)\n\n => Array\n(\n => 119.153.56.58\n)\n\n => Array\n(\n => 105.50.22.150\n)\n\n => Array\n(\n => 115.42.68.126\n)\n\n => Array\n(\n => 182.189.223.159\n)\n\n => Array\n(\n => 39.59.36.90\n)\n\n => Array\n(\n => 111.92.76.114\n)\n\n => Array\n(\n => 157.47.226.163\n)\n\n => Array\n(\n => 202.47.44.37\n)\n\n => Array\n(\n => 106.51.234.172\n)\n\n => Array\n(\n => 103.101.88.166\n)\n\n => Array\n(\n => 27.6.246.146\n)\n\n => Array\n(\n => 103.255.5.83\n)\n\n => Array\n(\n => 103.98.210.185\n)\n\n => Array\n(\n => 122.173.114.134\n)\n\n => Array\n(\n => 122.173.77.248\n)\n\n => Array\n(\n => 5.62.41.172\n)\n\n => Array\n(\n => 180.178.181.17\n)\n\n => Array\n(\n => 37.120.133.224\n)\n\n => Array\n(\n => 45.131.5.156\n)\n\n => Array\n(\n => 110.39.100.110\n)\n\n => Array\n(\n => 176.110.38.185\n)\n\n => Array\n(\n => 36.255.41.64\n)\n\n => Array\n(\n => 103.104.192.15\n)\n\n => Array\n(\n => 43.245.131.195\n)\n\n => Array\n(\n => 14.248.111.185\n)\n\n => Array\n(\n => 122.173.217.133\n)\n\n => Array\n(\n => 106.223.90.245\n)\n\n => Array\n(\n => 119.153.56.80\n)\n\n => Array\n(\n => 103.7.60.172\n)\n\n => Array\n(\n => 157.46.184.233\n)\n\n => Array\n(\n => 182.190.31.95\n)\n\n => Array\n(\n => 109.87.189.122\n)\n\n => Array\n(\n => 91.74.25.100\n)\n\n => Array\n(\n => 182.185.224.144\n)\n\n => Array\n(\n => 106.223.91.221\n)\n\n => Array\n(\n => 182.190.223.40\n)\n\n => Array\n(\n => 2.58.194.134\n)\n\n => Array\n(\n => 196.246.225.236\n)\n\n => Array\n(\n => 106.223.90.173\n)\n\n => Array\n(\n => 23.239.16.54\n)\n\n => Array\n(\n => 157.46.65.225\n)\n\n => Array\n(\n => 115.186.130.14\n)\n\n => Array\n(\n => 103.85.125.157\n)\n\n => Array\n(\n => 14.248.103.6\n)\n\n => Array\n(\n => 123.24.169.247\n)\n\n => Array\n(\n => 103.130.108.153\n)\n\n => Array\n(\n => 115.42.67.21\n)\n\n => Array\n(\n => 202.166.171.190\n)\n\n => Array\n(\n => 39.37.169.104\n)\n\n => Array\n(\n => 103.82.80.59\n)\n\n => Array\n(\n => 175.107.208.58\n)\n\n => Array\n(\n => 203.192.238.247\n)\n\n => Array\n(\n => 103.217.178.150\n)\n\n => Array\n(\n => 103.66.214.173\n)\n\n => Array\n(\n => 110.93.236.174\n)\n\n => Array\n(\n => 143.189.242.64\n)\n\n => Array\n(\n => 77.111.245.12\n)\n\n => Array\n(\n => 145.239.2.231\n)\n\n => Array\n(\n => 115.186.190.38\n)\n\n => Array\n(\n => 109.169.23.67\n)\n\n => Array\n(\n => 198.16.70.29\n)\n\n => Array\n(\n => 111.92.76.186\n)\n\n => Array\n(\n => 115.42.69.34\n)\n\n => Array\n(\n => 73.61.100.95\n)\n\n => Array\n(\n => 103.129.142.31\n)\n\n => Array\n(\n => 103.255.5.53\n)\n\n => Array\n(\n => 103.76.55.2\n)\n\n => Array\n(\n => 47.9.141.138\n)\n\n => Array\n(\n => 103.55.89.234\n)\n\n => Array\n(\n => 103.223.13.53\n)\n\n => Array\n(\n => 175.158.50.203\n)\n\n => Array\n(\n => 103.255.5.90\n)\n\n => Array\n(\n => 106.223.100.138\n)\n\n => Array\n(\n => 39.37.143.193\n)\n\n => Array\n(\n => 206.189.133.131\n)\n\n => Array\n(\n => 43.224.0.233\n)\n\n => Array\n(\n => 115.186.132.106\n)\n\n => Array\n(\n => 31.43.21.159\n)\n\n => Array\n(\n => 119.155.56.131\n)\n\n => Array\n(\n => 103.82.80.138\n)\n\n => Array\n(\n => 24.87.128.119\n)\n\n => Array\n(\n => 106.210.103.163\n)\n\n => Array\n(\n => 103.82.80.90\n)\n\n => Array\n(\n => 157.46.186.45\n)\n\n => Array\n(\n => 157.44.155.238\n)\n\n => Array\n(\n => 103.119.199.2\n)\n\n => Array\n(\n => 27.97.169.205\n)\n\n => Array\n(\n => 157.46.174.89\n)\n\n => Array\n(\n => 43.250.58.220\n)\n\n => Array\n(\n => 76.189.186.64\n)\n\n => Array\n(\n => 103.255.5.57\n)\n\n => Array\n(\n => 171.61.196.136\n)\n\n => Array\n(\n => 202.47.40.88\n)\n\n => Array\n(\n => 97.118.94.116\n)\n\n => Array\n(\n => 157.44.124.157\n)\n\n => Array\n(\n => 95.142.120.13\n)\n\n => Array\n(\n => 42.201.229.151\n)\n\n => Array\n(\n => 157.46.178.95\n)\n\n => Array\n(\n => 169.149.215.192\n)\n\n => Array\n(\n => 42.111.19.48\n)\n\n => Array\n(\n => 1.38.52.18\n)\n\n => Array\n(\n => 145.239.91.241\n)\n\n => Array\n(\n => 47.31.78.191\n)\n\n => Array\n(\n => 103.77.42.60\n)\n\n => Array\n(\n => 157.46.107.144\n)\n\n => Array\n(\n => 157.46.125.124\n)\n\n => Array\n(\n => 110.225.218.108\n)\n\n => Array\n(\n => 106.51.77.185\n)\n\n => Array\n(\n => 123.24.161.207\n)\n\n => Array\n(\n => 106.210.108.22\n)\n\n => Array\n(\n => 42.111.10.14\n)\n\n => Array\n(\n => 223.29.231.175\n)\n\n => Array\n(\n => 27.56.152.132\n)\n\n => Array\n(\n => 119.155.31.100\n)\n\n => Array\n(\n => 122.173.172.127\n)\n\n => Array\n(\n => 103.77.42.64\n)\n\n => Array\n(\n => 157.44.164.106\n)\n\n => Array\n(\n => 14.181.53.38\n)\n\n => Array\n(\n => 115.42.67.64\n)\n\n => Array\n(\n => 47.31.33.140\n)\n\n => Array\n(\n => 103.15.60.234\n)\n\n => Array\n(\n => 182.64.219.181\n)\n\n => Array\n(\n => 103.44.51.6\n)\n\n => Array\n(\n => 116.74.25.157\n)\n\n => Array\n(\n => 116.71.2.128\n)\n\n => Array\n(\n => 157.32.185.239\n)\n\n => Array\n(\n => 47.31.25.79\n)\n\n => Array\n(\n => 178.62.85.75\n)\n\n => Array\n(\n => 180.178.190.39\n)\n\n => Array\n(\n => 39.48.52.179\n)\n\n => Array\n(\n => 106.193.11.240\n)\n\n => Array\n(\n => 103.82.80.226\n)\n\n => Array\n(\n => 49.206.126.30\n)\n\n => Array\n(\n => 157.245.191.173\n)\n\n => Array\n(\n => 49.205.84.237\n)\n\n => Array\n(\n => 47.8.181.232\n)\n\n => Array\n(\n => 182.66.2.92\n)\n\n => Array\n(\n => 49.34.137.220\n)\n\n => Array\n(\n => 209.205.217.125\n)\n\n => Array\n(\n => 192.64.5.73\n)\n\n => Array\n(\n => 27.63.166.108\n)\n\n => Array\n(\n => 120.29.96.211\n)\n\n => Array\n(\n => 182.186.112.135\n)\n\n => Array\n(\n => 45.118.165.151\n)\n\n => Array\n(\n => 47.8.228.12\n)\n\n => Array\n(\n => 106.215.3.162\n)\n\n => Array\n(\n => 111.92.72.66\n)\n\n => Array\n(\n => 169.145.2.9\n)\n\n => Array\n(\n => 106.207.205.100\n)\n\n => Array\n(\n => 223.181.8.12\n)\n\n => Array\n(\n => 157.48.149.78\n)\n\n => Array\n(\n => 103.206.138.116\n)\n\n => Array\n(\n => 39.53.119.22\n)\n\n => Array\n(\n => 157.33.232.106\n)\n\n => Array\n(\n => 49.37.205.139\n)\n\n => Array\n(\n => 115.42.68.3\n)\n\n => Array\n(\n => 93.72.182.251\n)\n\n => Array\n(\n => 202.142.166.22\n)\n\n => Array\n(\n => 157.119.81.111\n)\n\n => Array\n(\n => 182.186.116.155\n)\n\n => Array\n(\n => 157.37.171.37\n)\n\n => Array\n(\n => 117.206.164.48\n)\n\n => Array\n(\n => 49.36.52.63\n)\n\n => Array\n(\n => 203.175.72.112\n)\n\n => Array\n(\n => 171.61.132.193\n)\n\n => Array\n(\n => 111.119.187.44\n)\n\n => Array\n(\n => 39.37.165.216\n)\n\n => Array\n(\n => 103.86.109.58\n)\n\n => Array\n(\n => 39.59.2.86\n)\n\n => Array\n(\n => 111.119.187.28\n)\n\n => Array\n(\n => 106.201.9.10\n)\n\n => Array\n(\n => 49.35.25.106\n)\n\n => Array\n(\n => 157.49.239.103\n)\n\n => Array\n(\n => 157.49.237.198\n)\n\n => Array\n(\n => 14.248.64.121\n)\n\n => Array\n(\n => 117.102.7.214\n)\n\n => Array\n(\n => 120.29.91.246\n)\n\n => Array\n(\n => 103.7.79.41\n)\n\n => Array\n(\n => 132.154.99.209\n)\n\n => Array\n(\n => 212.36.27.245\n)\n\n => Array\n(\n => 157.44.154.9\n)\n\n => Array\n(\n => 47.31.56.44\n)\n\n => Array\n(\n => 192.142.199.136\n)\n\n => Array\n(\n => 171.61.159.49\n)\n\n => Array\n(\n => 119.160.116.151\n)\n\n => Array\n(\n => 103.98.63.39\n)\n\n => Array\n(\n => 41.60.233.216\n)\n\n => Array\n(\n => 49.36.75.212\n)\n\n => Array\n(\n => 223.188.60.20\n)\n\n => Array\n(\n => 103.98.63.50\n)\n\n => Array\n(\n => 178.162.198.21\n)\n\n => Array\n(\n => 157.46.209.35\n)\n\n => Array\n(\n => 119.155.32.151\n)\n\n => Array\n(\n => 102.185.58.161\n)\n\n => Array\n(\n => 59.96.89.231\n)\n\n => Array\n(\n => 119.155.255.198\n)\n\n => Array\n(\n => 42.107.204.57\n)\n\n => Array\n(\n => 42.106.181.74\n)\n\n => Array\n(\n => 157.46.219.186\n)\n\n => Array\n(\n => 115.42.71.49\n)\n\n => Array\n(\n => 157.46.209.131\n)\n\n => Array\n(\n => 220.81.15.94\n)\n\n => Array\n(\n => 111.119.187.24\n)\n\n => Array\n(\n => 49.37.195.185\n)\n\n => Array\n(\n => 42.106.181.85\n)\n\n => Array\n(\n => 43.249.225.134\n)\n\n => Array\n(\n => 117.206.165.151\n)\n\n => Array\n(\n => 119.153.48.250\n)\n\n => Array\n(\n => 27.4.172.162\n)\n\n => Array\n(\n => 117.20.29.51\n)\n\n => Array\n(\n => 103.98.63.135\n)\n\n => Array\n(\n => 117.7.218.229\n)\n\n => Array\n(\n => 157.49.233.105\n)\n\n => Array\n(\n => 39.53.151.199\n)\n\n => Array\n(\n => 101.255.118.33\n)\n\n => Array\n(\n => 41.141.246.9\n)\n\n => Array\n(\n => 221.132.113.78\n)\n\n => Array\n(\n => 119.160.116.202\n)\n\n => Array\n(\n => 117.237.193.244\n)\n\n => Array\n(\n => 157.41.110.145\n)\n\n => Array\n(\n => 103.98.63.5\n)\n\n => Array\n(\n => 103.125.129.58\n)\n\n => Array\n(\n => 183.83.254.66\n)\n\n => Array\n(\n => 45.135.236.160\n)\n\n => Array\n(\n => 198.199.87.124\n)\n\n => Array\n(\n => 193.176.86.41\n)\n\n => Array\n(\n => 115.97.142.98\n)\n\n => Array\n(\n => 222.252.38.198\n)\n\n => Array\n(\n => 110.93.237.49\n)\n\n => Array\n(\n => 103.224.48.122\n)\n\n => Array\n(\n => 110.38.28.130\n)\n\n => Array\n(\n => 106.211.238.154\n)\n\n => Array\n(\n => 111.88.41.73\n)\n\n => Array\n(\n => 119.155.13.143\n)\n\n => Array\n(\n => 103.213.111.60\n)\n\n => Array\n(\n => 202.0.103.42\n)\n\n => Array\n(\n => 157.48.144.33\n)\n\n => Array\n(\n => 111.119.187.62\n)\n\n => Array\n(\n => 103.87.212.71\n)\n\n => Array\n(\n => 157.37.177.20\n)\n\n => Array\n(\n => 223.233.71.92\n)\n\n => Array\n(\n => 116.213.32.107\n)\n\n => Array\n(\n => 104.248.173.151\n)\n\n => Array\n(\n => 14.181.102.222\n)\n\n => Array\n(\n => 103.10.224.252\n)\n\n => Array\n(\n => 175.158.50.57\n)\n\n => Array\n(\n => 165.22.122.199\n)\n\n => Array\n(\n => 23.106.56.12\n)\n\n => Array\n(\n => 203.122.10.146\n)\n\n => Array\n(\n => 37.111.136.138\n)\n\n => Array\n(\n => 103.87.193.66\n)\n\n => Array\n(\n => 39.59.122.246\n)\n\n => Array\n(\n => 111.119.183.63\n)\n\n => Array\n(\n => 157.46.72.102\n)\n\n => Array\n(\n => 185.132.133.82\n)\n\n => Array\n(\n => 118.103.230.148\n)\n\n => Array\n(\n => 5.62.39.45\n)\n\n => Array\n(\n => 119.152.144.134\n)\n\n => Array\n(\n => 172.105.117.102\n)\n\n => Array\n(\n => 122.254.70.212\n)\n\n => Array\n(\n => 102.185.128.97\n)\n\n => Array\n(\n => 182.69.249.11\n)\n\n => Array\n(\n => 105.163.134.167\n)\n\n => Array\n(\n => 111.119.187.38\n)\n\n => Array\n(\n => 103.46.195.93\n)\n\n => Array\n(\n => 106.204.161.156\n)\n\n => Array\n(\n => 122.176.2.175\n)\n\n => Array\n(\n => 117.99.162.31\n)\n\n => Array\n(\n => 106.212.241.242\n)\n\n => Array\n(\n => 42.107.196.149\n)\n\n => Array\n(\n => 212.90.60.57\n)\n\n => Array\n(\n => 175.107.237.12\n)\n\n => Array\n(\n => 157.46.119.152\n)\n\n => Array\n(\n => 157.34.81.12\n)\n\n => Array\n(\n => 162.243.1.22\n)\n\n => Array\n(\n => 110.37.222.178\n)\n\n => Array\n(\n => 103.46.195.68\n)\n\n => Array\n(\n => 119.160.116.81\n)\n\n => Array\n(\n => 138.197.131.28\n)\n\n => Array\n(\n => 103.88.218.124\n)\n\n => Array\n(\n => 192.241.172.113\n)\n\n => Array\n(\n => 110.39.174.106\n)\n\n => Array\n(\n => 111.88.48.17\n)\n\n => Array\n(\n => 42.108.160.218\n)\n\n => Array\n(\n => 117.102.0.16\n)\n\n => Array\n(\n => 157.46.125.235\n)\n\n => Array\n(\n => 14.190.242.251\n)\n\n => Array\n(\n => 47.31.184.64\n)\n\n => Array\n(\n => 49.205.84.157\n)\n\n => Array\n(\n => 122.162.115.247\n)\n\n => Array\n(\n => 41.202.219.74\n)\n\n => Array\n(\n => 106.215.9.67\n)\n\n => Array\n(\n => 103.87.56.208\n)\n\n => Array\n(\n => 103.46.194.147\n)\n\n => Array\n(\n => 116.90.98.81\n)\n\n => Array\n(\n => 115.42.71.213\n)\n\n => Array\n(\n => 39.49.35.192\n)\n\n => Array\n(\n => 41.202.219.65\n)\n\n => Array\n(\n => 131.212.249.93\n)\n\n => Array\n(\n => 49.205.16.251\n)\n\n => Array\n(\n => 39.34.147.250\n)\n\n => Array\n(\n => 183.83.210.185\n)\n\n => Array\n(\n => 49.37.194.215\n)\n\n => Array\n(\n => 103.46.194.108\n)\n\n => Array\n(\n => 89.36.219.233\n)\n\n => Array\n(\n => 119.152.105.178\n)\n\n => Array\n(\n => 202.47.45.125\n)\n\n => Array\n(\n => 156.146.59.27\n)\n\n => Array\n(\n => 132.154.21.156\n)\n\n => Array\n(\n => 157.44.35.31\n)\n\n => Array\n(\n => 41.80.118.124\n)\n\n => Array\n(\n => 47.31.159.198\n)\n\n => Array\n(\n => 103.209.223.140\n)\n\n => Array\n(\n => 157.46.130.138\n)\n\n => Array\n(\n => 49.37.199.246\n)\n\n => Array\n(\n => 111.88.242.10\n)\n\n => Array\n(\n => 43.241.145.110\n)\n\n => Array\n(\n => 124.153.16.30\n)\n\n => Array\n(\n => 27.5.22.173\n)\n\n => Array\n(\n => 111.88.191.173\n)\n\n => Array\n(\n => 41.60.236.200\n)\n\n => Array\n(\n => 115.42.67.146\n)\n\n => Array\n(\n => 150.242.173.7\n)\n\n => Array\n(\n => 14.248.71.23\n)\n\n => Array\n(\n => 111.119.187.4\n)\n\n => Array\n(\n => 124.29.212.118\n)\n\n => Array\n(\n => 51.68.205.163\n)\n\n => Array\n(\n => 182.184.107.63\n)\n\n => Array\n(\n => 106.211.253.87\n)\n\n => Array\n(\n => 223.190.89.5\n)\n\n => Array\n(\n => 183.83.212.63\n)\n\n => Array\n(\n => 129.205.113.227\n)\n\n => Array\n(\n => 106.210.40.141\n)\n\n => Array\n(\n => 91.202.163.169\n)\n\n => Array\n(\n => 76.105.191.89\n)\n\n => Array\n(\n => 171.51.244.160\n)\n\n => Array\n(\n => 37.139.188.92\n)\n\n => Array\n(\n => 23.106.56.37\n)\n\n => Array\n(\n => 157.44.175.180\n)\n\n => Array\n(\n => 122.2.122.97\n)\n\n => Array\n(\n => 103.87.192.194\n)\n\n => Array\n(\n => 192.154.253.6\n)\n\n => Array\n(\n => 77.243.191.19\n)\n\n => Array\n(\n => 122.254.70.46\n)\n\n => Array\n(\n => 154.76.233.73\n)\n\n => Array\n(\n => 195.181.167.150\n)\n\n => Array\n(\n => 209.209.228.5\n)\n\n => Array\n(\n => 203.192.212.115\n)\n\n => Array\n(\n => 221.132.118.179\n)\n\n => Array\n(\n => 117.208.210.204\n)\n\n => Array\n(\n => 120.29.90.126\n)\n\n => Array\n(\n => 36.77.239.190\n)\n\n => Array\n(\n => 157.37.137.127\n)\n\n => Array\n(\n => 39.40.243.6\n)\n\n => Array\n(\n => 182.182.41.201\n)\n\n => Array\n(\n => 39.59.32.46\n)\n\n => Array\n(\n => 111.119.183.36\n)\n\n => Array\n(\n => 103.83.147.61\n)\n\n => Array\n(\n => 103.82.80.85\n)\n\n => Array\n(\n => 103.46.194.161\n)\n\n => Array\n(\n => 101.50.105.38\n)\n\n => Array\n(\n => 111.119.183.58\n)\n\n => Array\n(\n => 47.9.234.51\n)\n\n => Array\n(\n => 120.29.86.157\n)\n\n => Array\n(\n => 175.158.50.70\n)\n\n => Array\n(\n => 112.196.163.235\n)\n\n => Array\n(\n => 139.167.161.85\n)\n\n => Array\n(\n => 106.207.39.181\n)\n\n => Array\n(\n => 103.77.42.159\n)\n\n => Array\n(\n => 185.56.138.220\n)\n\n => Array\n(\n => 119.155.33.205\n)\n\n => Array\n(\n => 157.42.117.124\n)\n\n => Array\n(\n => 103.117.202.202\n)\n\n => Array\n(\n => 220.253.101.109\n)\n\n => Array\n(\n => 49.37.7.247\n)\n\n => Array\n(\n => 119.160.65.27\n)\n\n => Array\n(\n => 114.122.21.151\n)\n\n => Array\n(\n => 157.44.141.83\n)\n\n => Array\n(\n => 103.131.9.7\n)\n\n => Array\n(\n => 125.99.222.21\n)\n\n => Array\n(\n => 103.238.104.206\n)\n\n => Array\n(\n => 110.93.227.100\n)\n\n => Array\n(\n => 49.14.119.114\n)\n\n => Array\n(\n => 115.186.189.82\n)\n\n => Array\n(\n => 106.201.194.2\n)\n\n => Array\n(\n => 106.204.227.28\n)\n\n => Array\n(\n => 47.31.206.13\n)\n\n => Array\n(\n => 39.42.144.109\n)\n\n => Array\n(\n => 14.253.254.90\n)\n\n => Array\n(\n => 157.44.142.118\n)\n\n => Array\n(\n => 192.142.176.21\n)\n\n => Array\n(\n => 103.217.178.225\n)\n\n => Array\n(\n => 106.78.78.16\n)\n\n => Array\n(\n => 167.71.63.184\n)\n\n => Array\n(\n => 207.244.71.82\n)\n\n => Array\n(\n => 71.105.25.145\n)\n\n => Array\n(\n => 39.51.250.30\n)\n\n => Array\n(\n => 157.41.120.160\n)\n\n => Array\n(\n => 39.37.137.81\n)\n\n => Array\n(\n => 41.80.237.27\n)\n\n => Array\n(\n => 111.119.187.50\n)\n\n => Array\n(\n => 49.145.224.252\n)\n\n => Array\n(\n => 106.197.28.106\n)\n\n => Array\n(\n => 103.217.178.240\n)\n\n => Array\n(\n => 27.97.182.237\n)\n\n => Array\n(\n => 106.211.253.72\n)\n\n => Array\n(\n => 119.152.154.172\n)\n\n => Array\n(\n => 103.255.151.148\n)\n\n => Array\n(\n => 154.157.80.12\n)\n\n => Array\n(\n => 156.146.59.28\n)\n\n => Array\n(\n => 171.61.211.64\n)\n\n => Array\n(\n => 27.76.59.22\n)\n\n => Array\n(\n => 167.99.92.124\n)\n\n => Array\n(\n => 132.154.94.51\n)\n\n => Array\n(\n => 111.119.183.38\n)\n\n => Array\n(\n => 115.42.70.169\n)\n\n => Array\n(\n => 109.169.23.83\n)\n\n => Array\n(\n => 157.46.213.64\n)\n\n => Array\n(\n => 39.37.179.171\n)\n\n => Array\n(\n => 14.232.233.32\n)\n\n => Array\n(\n => 157.49.226.13\n)\n\n => Array\n(\n => 185.209.178.78\n)\n\n => Array\n(\n => 222.252.46.230\n)\n\n => Array\n(\n => 139.5.255.168\n)\n\n => Array\n(\n => 202.8.118.12\n)\n\n => Array\n(\n => 39.53.205.63\n)\n\n => Array\n(\n => 157.37.167.227\n)\n\n => Array\n(\n => 157.49.237.121\n)\n\n => Array\n(\n => 208.89.99.6\n)\n\n => Array\n(\n => 111.119.187.33\n)\n\n => Array\n(\n => 39.37.132.101\n)\n\n => Array\n(\n => 72.255.61.15\n)\n\n => Array\n(\n => 157.41.69.126\n)\n\n => Array\n(\n => 27.6.193.15\n)\n\n => Array\n(\n => 157.41.104.8\n)\n\n => Array\n(\n => 157.41.97.162\n)\n\n => Array\n(\n => 95.136.91.67\n)\n\n => Array\n(\n => 110.93.209.138\n)\n\n => Array\n(\n => 119.152.154.82\n)\n\n => Array\n(\n => 111.88.239.223\n)\n\n => Array\n(\n => 157.230.62.100\n)\n\n => Array\n(\n => 37.111.136.167\n)\n\n => Array\n(\n => 139.167.162.65\n)\n\n => Array\n(\n => 120.29.72.72\n)\n\n => Array\n(\n => 39.42.169.69\n)\n\n => Array\n(\n => 157.49.247.12\n)\n\n => Array\n(\n => 43.231.58.221\n)\n\n => Array\n(\n => 111.88.229.18\n)\n\n => Array\n(\n => 171.79.185.198\n)\n\n => Array\n(\n => 169.149.193.102\n)\n\n => Array\n(\n => 207.244.89.162\n)\n\n => Array\n(\n => 27.4.217.129\n)\n\n => Array\n(\n => 91.236.184.12\n)\n\n => Array\n(\n => 14.192.154.150\n)\n\n => Array\n(\n => 167.172.55.253\n)\n\n => Array\n(\n => 103.77.42.192\n)\n\n => Array\n(\n => 39.59.122.140\n)\n\n => Array\n(\n => 41.80.84.46\n)\n\n => Array\n(\n => 202.47.52.115\n)\n\n => Array\n(\n => 222.252.43.47\n)\n\n => Array\n(\n => 119.155.37.250\n)\n\n => Array\n(\n => 157.41.18.88\n)\n\n => Array\n(\n => 39.42.8.59\n)\n\n => Array\n(\n => 39.45.162.110\n)\n\n => Array\n(\n => 111.88.237.25\n)\n\n => Array\n(\n => 103.76.211.168\n)\n\n => Array\n(\n => 178.137.114.165\n)\n\n => Array\n(\n => 43.225.74.146\n)\n\n => Array\n(\n => 157.42.25.26\n)\n\n => Array\n(\n => 137.59.146.63\n)\n\n => Array\n(\n => 119.160.117.190\n)\n\n => Array\n(\n => 1.186.181.133\n)\n\n => Array\n(\n => 39.42.145.94\n)\n\n => Array\n(\n => 203.175.73.96\n)\n\n => Array\n(\n => 39.37.160.14\n)\n\n => Array\n(\n => 157.39.123.250\n)\n\n => Array\n(\n => 95.135.57.82\n)\n\n => Array\n(\n => 162.210.194.35\n)\n\n => Array\n(\n => 39.42.153.135\n)\n\n => Array\n(\n => 118.103.230.106\n)\n\n => Array\n(\n => 108.61.39.115\n)\n\n => Array\n(\n => 102.7.108.45\n)\n\n => Array\n(\n => 183.83.138.134\n)\n\n => Array\n(\n => 115.186.70.223\n)\n\n => Array\n(\n => 157.34.17.139\n)\n\n => Array\n(\n => 122.166.158.231\n)\n\n => Array\n(\n => 43.227.135.90\n)\n\n => Array\n(\n => 182.68.46.180\n)\n\n => Array\n(\n => 223.225.28.138\n)\n\n => Array\n(\n => 103.77.42.220\n)\n\n => Array\n(\n => 192.241.219.13\n)\n\n => Array\n(\n => 103.82.80.113\n)\n\n => Array\n(\n => 42.111.243.151\n)\n\n => Array\n(\n => 171.79.189.247\n)\n\n => Array\n(\n => 157.32.132.102\n)\n\n => Array\n(\n => 103.130.105.243\n)\n\n => Array\n(\n => 117.223.98.120\n)\n\n => Array\n(\n => 106.215.197.187\n)\n\n => Array\n(\n => 182.190.194.179\n)\n\n => Array\n(\n => 223.225.29.42\n)\n\n => Array\n(\n => 117.222.94.151\n)\n\n => Array\n(\n => 182.185.199.104\n)\n\n => Array\n(\n => 49.36.145.77\n)\n\n => Array\n(\n => 103.82.80.73\n)\n\n => Array\n(\n => 103.77.16.13\n)\n\n => Array\n(\n => 221.132.118.86\n)\n\n => Array\n(\n => 202.47.45.77\n)\n\n => Array\n(\n => 202.8.118.116\n)\n\n => Array\n(\n => 42.106.180.185\n)\n\n => Array\n(\n => 203.122.8.234\n)\n\n => Array\n(\n => 88.230.104.245\n)\n\n => Array\n(\n => 103.131.9.33\n)\n\n => Array\n(\n => 117.207.209.60\n)\n\n => Array\n(\n => 42.111.253.227\n)\n\n => Array\n(\n => 23.106.56.54\n)\n\n => Array\n(\n => 122.178.143.181\n)\n\n => Array\n(\n => 111.88.180.5\n)\n\n => Array\n(\n => 174.55.224.161\n)\n\n => Array\n(\n => 49.205.87.100\n)\n\n => Array\n(\n => 49.34.183.118\n)\n\n => Array\n(\n => 124.155.255.154\n)\n\n => Array\n(\n => 106.212.135.200\n)\n\n => Array\n(\n => 139.99.159.11\n)\n\n => Array\n(\n => 45.135.229.8\n)\n\n => Array\n(\n => 88.230.106.85\n)\n\n => Array\n(\n => 91.153.145.221\n)\n\n => Array\n(\n => 103.95.83.33\n)\n\n => Array\n(\n => 122.178.116.76\n)\n\n => Array\n(\n => 103.135.78.14\n)\n\n => Array\n(\n => 111.88.233.206\n)\n\n => Array\n(\n => 192.140.153.210\n)\n\n => Array\n(\n => 202.8.118.69\n)\n\n => Array\n(\n => 103.83.130.81\n)\n\n => Array\n(\n => 182.190.213.143\n)\n\n => Array\n(\n => 198.16.74.204\n)\n\n => Array\n(\n => 101.128.117.248\n)\n\n => Array\n(\n => 103.108.5.147\n)\n\n => Array\n(\n => 157.32.130.158\n)\n\n => Array\n(\n => 103.244.172.93\n)\n\n => Array\n(\n => 47.30.140.126\n)\n\n => Array\n(\n => 223.188.40.124\n)\n\n => Array\n(\n => 157.44.191.102\n)\n\n => Array\n(\n => 41.60.237.62\n)\n\n => Array\n(\n => 47.31.228.161\n)\n\n => Array\n(\n => 137.59.217.188\n)\n\n => Array\n(\n => 39.53.220.237\n)\n\n => Array\n(\n => 45.127.45.199\n)\n\n => Array\n(\n => 14.190.71.19\n)\n\n => Array\n(\n => 47.18.205.54\n)\n\n => Array\n(\n => 110.93.240.11\n)\n\n => Array\n(\n => 134.209.29.111\n)\n\n => Array\n(\n => 49.36.175.104\n)\n\n => Array\n(\n => 203.192.230.61\n)\n\n => Array\n(\n => 176.10.125.115\n)\n\n => Array\n(\n => 182.18.206.17\n)\n\n => Array\n(\n => 103.87.194.102\n)\n\n => Array\n(\n => 171.79.123.106\n)\n\n => Array\n(\n => 45.116.233.35\n)\n\n => Array\n(\n => 223.190.57.225\n)\n\n => Array\n(\n => 114.125.6.158\n)\n\n => Array\n(\n => 223.179.138.176\n)\n\n => Array\n(\n => 111.119.183.61\n)\n\n => Array\n(\n => 202.8.118.43\n)\n\n => Array\n(\n => 157.51.175.216\n)\n\n => Array\n(\n => 41.60.238.100\n)\n\n => Array\n(\n => 117.207.210.199\n)\n\n => Array\n(\n => 111.119.183.26\n)\n\n => Array\n(\n => 103.252.226.12\n)\n\n => Array\n(\n => 103.221.208.82\n)\n\n => Array\n(\n => 103.82.80.228\n)\n\n => Array\n(\n => 111.119.187.39\n)\n\n => Array\n(\n => 157.51.161.199\n)\n\n => Array\n(\n => 59.96.88.246\n)\n\n => Array\n(\n => 27.4.181.183\n)\n\n => Array\n(\n => 43.225.98.124\n)\n\n => Array\n(\n => 157.51.113.74\n)\n\n => Array\n(\n => 207.244.89.161\n)\n\n => Array\n(\n => 49.37.184.82\n)\n\n => Array\n(\n => 111.119.183.4\n)\n\n => Array\n(\n => 39.42.130.147\n)\n\n => Array\n(\n => 103.152.101.2\n)\n\n => Array\n(\n => 111.119.183.2\n)\n\n => Array\n(\n => 157.51.171.149\n)\n\n => Array\n(\n => 103.82.80.245\n)\n\n => Array\n(\n => 175.107.207.133\n)\n\n => Array\n(\n => 103.204.169.158\n)\n\n => Array\n(\n => 157.51.181.12\n)\n\n => Array\n(\n => 195.158.193.212\n)\n\n => Array\n(\n => 204.14.73.85\n)\n\n => Array\n(\n => 39.59.59.31\n)\n\n => Array\n(\n => 45.148.11.82\n)\n\n => Array\n(\n => 157.46.117.250\n)\n\n => Array\n(\n => 157.46.127.170\n)\n\n => Array\n(\n => 77.247.181.165\n)\n\n => Array\n(\n => 111.119.183.54\n)\n\n => Array\n(\n => 41.60.232.183\n)\n\n => Array\n(\n => 157.42.206.174\n)\n\n => Array\n(\n => 196.53.10.246\n)\n\n => Array\n(\n => 27.97.186.131\n)\n\n => Array\n(\n => 103.73.101.134\n)\n\n => Array\n(\n => 111.119.183.35\n)\n\n => Array\n(\n => 202.8.118.111\n)\n\n => Array\n(\n => 103.75.246.207\n)\n\n => Array\n(\n => 47.8.94.225\n)\n\n => Array\n(\n => 106.202.40.83\n)\n\n => Array\n(\n => 117.102.2.0\n)\n\n => Array\n(\n => 156.146.59.11\n)\n\n => Array\n(\n => 223.190.115.125\n)\n\n => Array\n(\n => 169.149.212.232\n)\n\n => Array\n(\n => 39.45.150.127\n)\n\n => Array\n(\n => 45.63.10.204\n)\n\n => Array\n(\n => 27.57.86.46\n)\n\n => Array\n(\n => 103.127.20.138\n)\n\n => Array\n(\n => 223.190.27.26\n)\n\n => Array\n(\n => 49.15.248.78\n)\n\n => Array\n(\n => 130.105.135.103\n)\n\n => Array\n(\n => 47.31.3.239\n)\n\n => Array\n(\n => 185.66.71.8\n)\n\n => Array\n(\n => 103.226.226.198\n)\n\n => Array\n(\n => 39.34.134.16\n)\n\n => Array\n(\n => 95.158.53.120\n)\n\n => Array\n(\n => 45.9.249.246\n)\n\n => Array\n(\n => 223.235.162.157\n)\n\n => Array\n(\n => 37.111.139.23\n)\n\n => Array\n(\n => 49.37.153.47\n)\n\n => Array\n(\n => 103.242.60.205\n)\n\n => Array\n(\n => 185.66.68.18\n)\n\n => Array\n(\n => 162.221.202.138\n)\n\n => Array\n(\n => 202.63.195.29\n)\n\n => Array\n(\n => 112.198.75.226\n)\n\n => Array\n(\n => 46.200.69.233\n)\n\n => Array\n(\n => 103.135.78.30\n)\n\n => Array\n(\n => 119.152.226.9\n)\n\n => Array\n(\n => 167.172.242.50\n)\n\n => Array\n(\n => 49.36.151.31\n)\n\n => Array\n(\n => 111.88.237.156\n)\n\n => Array\n(\n => 103.215.168.1\n)\n\n => Array\n(\n => 107.181.177.137\n)\n\n => Array\n(\n => 157.119.186.202\n)\n\n => Array\n(\n => 37.111.139.106\n)\n\n => Array\n(\n => 182.180.152.198\n)\n\n => Array\n(\n => 43.248.153.72\n)\n\n => Array\n(\n => 64.188.20.84\n)\n\n => Array\n(\n => 103.92.214.11\n)\n\n => Array\n(\n => 182.182.14.148\n)\n\n => Array\n(\n => 116.75.154.119\n)\n\n => Array\n(\n => 37.228.235.94\n)\n\n => Array\n(\n => 197.210.55.43\n)\n\n => Array\n(\n => 45.118.165.153\n)\n\n => Array\n(\n => 122.176.32.27\n)\n\n => Array\n(\n => 106.215.161.20\n)\n\n => Array\n(\n => 152.32.113.58\n)\n\n => Array\n(\n => 111.125.106.132\n)\n\n => Array\n(\n => 212.102.40.72\n)\n\n => Array\n(\n => 2.58.194.140\n)\n\n => Array\n(\n => 122.174.68.115\n)\n\n => Array\n(\n => 117.241.66.56\n)\n\n => Array\n(\n => 71.94.172.140\n)\n\n => Array\n(\n => 103.209.228.139\n)\n\n => Array\n(\n => 43.242.177.140\n)\n\n => Array\n(\n => 38.91.101.66\n)\n\n => Array\n(\n => 103.82.80.67\n)\n\n => Array\n(\n => 117.248.62.138\n)\n\n => Array\n(\n => 103.81.215.51\n)\n\n => Array\n(\n => 103.253.174.4\n)\n\n => Array\n(\n => 202.142.110.111\n)\n\n => Array\n(\n => 162.216.142.1\n)\n\n => Array\n(\n => 58.186.7.252\n)\n\n => Array\n(\n => 113.203.247.66\n)\n\n => Array\n(\n => 111.88.50.63\n)\n\n => Array\n(\n => 182.182.94.227\n)\n\n => Array\n(\n => 49.15.232.50\n)\n\n => Array\n(\n => 182.189.76.225\n)\n\n => Array\n(\n => 139.99.159.14\n)\n\n => Array\n(\n => 163.172.159.235\n)\n\n => Array\n(\n => 157.36.235.241\n)\n\n => Array\n(\n => 111.119.187.3\n)\n\n => Array\n(\n => 103.100.4.61\n)\n\n => Array\n(\n => 192.142.130.88\n)\n\n => Array\n(\n => 43.242.176.114\n)\n\n => Array\n(\n => 180.178.156.165\n)\n\n => Array\n(\n => 182.189.236.77\n)\n\n => Array\n(\n => 49.34.197.239\n)\n\n => Array\n(\n => 157.36.107.107\n)\n\n => Array\n(\n => 103.209.85.175\n)\n\n => Array\n(\n => 203.139.63.83\n)\n\n => Array\n(\n => 43.242.177.161\n)\n\n => Array\n(\n => 182.182.77.138\n)\n\n => Array\n(\n => 114.124.168.117\n)\n\n => Array\n(\n => 124.253.79.191\n)\n\n => Array\n(\n => 192.142.168.235\n)\n\n => Array\n(\n => 14.232.235.111\n)\n\n => Array\n(\n => 152.57.124.214\n)\n\n => Array\n(\n => 123.24.172.48\n)\n\n => Array\n(\n => 43.242.176.87\n)\n\n => Array\n(\n => 43.242.176.101\n)\n\n => Array\n(\n => 49.156.84.110\n)\n\n => Array\n(\n => 58.65.222.6\n)\n\n => Array\n(\n => 157.32.189.112\n)\n\n => Array\n(\n => 47.31.155.87\n)\n\n => Array\n(\n => 39.53.244.182\n)\n\n => Array\n(\n => 39.33.221.76\n)\n\n => Array\n(\n => 161.35.130.245\n)\n\n => Array\n(\n => 152.32.113.137\n)\n\n => Array\n(\n => 192.142.187.220\n)\n\n => Array\n(\n => 185.54.228.123\n)\n\n => Array\n(\n => 103.233.87.221\n)\n\n => Array\n(\n => 223.236.200.224\n)\n\n => Array\n(\n => 27.97.189.170\n)\n\n => Array\n(\n => 103.82.80.212\n)\n\n => Array\n(\n => 43.242.176.37\n)\n\n => Array\n(\n => 49.36.144.94\n)\n\n => Array\n(\n => 180.251.62.185\n)\n\n => Array\n(\n => 39.50.243.227\n)\n\n => Array\n(\n => 124.253.20.21\n)\n\n => Array\n(\n => 41.60.233.31\n)\n\n => Array\n(\n => 103.81.215.57\n)\n\n => Array\n(\n => 185.91.120.16\n)\n\n => Array\n(\n => 182.190.107.163\n)\n\n => Array\n(\n => 222.252.61.68\n)\n\n => Array\n(\n => 109.169.23.78\n)\n\n => Array\n(\n => 39.50.151.222\n)\n\n => Array\n(\n => 43.242.176.86\n)\n\n => Array\n(\n => 178.162.222.161\n)\n\n => Array\n(\n => 37.111.139.158\n)\n\n => Array\n(\n => 39.57.224.97\n)\n\n => Array\n(\n => 39.57.157.194\n)\n\n => Array\n(\n => 111.119.183.48\n)\n\n => Array\n(\n => 180.190.171.129\n)\n\n => Array\n(\n => 39.52.174.177\n)\n\n => Array\n(\n => 43.242.176.103\n)\n\n => Array\n(\n => 124.253.83.14\n)\n\n => Array\n(\n => 182.189.116.245\n)\n\n => Array\n(\n => 157.36.178.213\n)\n\n => Array\n(\n => 45.250.65.119\n)\n\n => Array\n(\n => 103.209.86.6\n)\n\n => Array\n(\n => 43.242.176.80\n)\n\n => Array\n(\n => 137.59.147.2\n)\n\n => Array\n(\n => 117.222.95.23\n)\n\n => Array\n(\n => 124.253.81.10\n)\n\n => Array\n(\n => 43.242.177.21\n)\n\n => Array\n(\n => 182.189.224.186\n)\n\n => Array\n(\n => 39.52.178.142\n)\n\n => Array\n(\n => 106.214.29.176\n)\n\n => Array\n(\n => 111.88.145.107\n)\n\n => Array\n(\n => 49.36.142.67\n)\n\n => Array\n(\n => 202.142.65.50\n)\n\n => Array\n(\n => 1.22.186.76\n)\n\n => Array\n(\n => 103.131.8.225\n)\n\n => Array\n(\n => 39.53.212.111\n)\n\n => Array\n(\n => 103.82.80.149\n)\n\n => Array\n(\n => 43.242.176.12\n)\n\n => Array\n(\n => 103.109.13.189\n)\n\n => Array\n(\n => 124.253.206.202\n)\n\n => Array\n(\n => 117.195.115.85\n)\n\n => Array\n(\n => 49.36.245.229\n)\n\n => Array\n(\n => 42.118.8.100\n)\n\n => Array\n(\n => 1.22.73.17\n)\n\n => Array\n(\n => 157.36.166.131\n)\n\n => Array\n(\n => 182.182.38.223\n)\n\n => Array\n(\n => 49.14.150.21\n)\n\n => Array\n(\n => 43.242.176.89\n)\n\n => Array\n(\n => 157.46.185.69\n)\n\n => Array\n(\n => 103.31.92.150\n)\n\n => Array\n(\n => 59.96.90.94\n)\n\n => Array\n(\n => 49.156.111.64\n)\n\n => Array\n(\n => 103.75.244.16\n)\n\n => Array\n(\n => 54.37.18.139\n)\n\n => Array\n(\n => 27.255.173.50\n)\n\n => Array\n(\n => 84.202.161.120\n)\n\n => Array\n(\n => 27.3.224.180\n)\n\n => Array\n(\n => 39.44.14.192\n)\n\n => Array\n(\n => 37.120.133.201\n)\n\n => Array\n(\n => 109.251.143.236\n)\n\n => Array\n(\n => 23.80.97.111\n)\n\n => Array\n(\n => 43.242.176.9\n)\n\n => Array\n(\n => 14.248.107.50\n)\n\n => Array\n(\n => 182.189.221.114\n)\n\n => Array\n(\n => 103.253.173.74\n)\n\n => Array\n(\n => 27.97.177.45\n)\n\n => Array\n(\n => 49.14.98.9\n)\n\n => Array\n(\n => 163.53.85.169\n)\n\n => Array\n(\n => 39.59.90.168\n)\n\n => Array\n(\n => 111.88.202.253\n)\n\n => Array\n(\n => 111.119.178.155\n)\n\n => Array\n(\n => 171.76.163.75\n)\n\n => Array\n(\n => 202.5.154.23\n)\n\n => Array\n(\n => 119.160.65.164\n)\n\n => Array\n(\n => 14.253.253.190\n)\n\n => Array\n(\n => 117.206.167.25\n)\n\n => Array\n(\n => 61.2.183.186\n)\n\n => Array\n(\n => 103.100.4.83\n)\n\n => Array\n(\n => 124.253.71.126\n)\n\n => Array\n(\n => 182.189.49.217\n)\n\n => Array\n(\n => 103.196.160.41\n)\n\n => Array\n(\n => 23.106.56.35\n)\n\n => Array\n(\n => 110.38.12.70\n)\n\n => Array\n(\n => 154.157.199.239\n)\n\n => Array\n(\n => 14.231.163.113\n)\n\n => Array\n(\n => 103.69.27.232\n)\n\n => Array\n(\n => 175.107.220.192\n)\n\n => Array\n(\n => 43.231.58.173\n)\n\n => Array\n(\n => 138.128.91.215\n)\n\n => Array\n(\n => 103.233.86.1\n)\n\n => Array\n(\n => 182.187.67.111\n)\n\n => Array\n(\n => 49.156.71.31\n)\n\n => Array\n(\n => 27.255.174.125\n)\n\n => Array\n(\n => 195.24.220.35\n)\n\n => Array\n(\n => 120.29.98.28\n)\n\n => Array\n(\n => 41.202.219.255\n)\n\n => Array\n(\n => 103.88.3.243\n)\n\n => Array\n(\n => 111.125.106.75\n)\n\n => Array\n(\n => 106.76.71.74\n)\n\n => Array\n(\n => 112.201.138.85\n)\n\n => Array\n(\n => 110.137.101.229\n)\n\n => Array\n(\n => 43.242.177.96\n)\n\n => Array\n(\n => 39.36.198.196\n)\n\n => Array\n(\n => 27.255.181.140\n)\n\n => Array\n(\n => 194.99.104.58\n)\n\n => Array\n(\n => 78.129.139.109\n)\n\n => Array\n(\n => 47.247.185.67\n)\n\n => Array\n(\n => 27.63.37.90\n)\n\n => Array\n(\n => 103.211.54.1\n)\n\n => Array\n(\n => 94.202.167.139\n)\n\n => Array\n(\n => 111.119.183.3\n)\n\n => Array\n(\n => 124.253.194.1\n)\n\n => Array\n(\n => 192.142.188.115\n)\n\n => Array\n(\n => 39.44.137.107\n)\n\n => Array\n(\n => 43.251.191.25\n)\n\n => Array\n(\n => 103.140.30.114\n)\n\n => Array\n(\n => 117.5.194.159\n)\n\n => Array\n(\n => 109.169.23.79\n)\n\n => Array\n(\n => 122.178.127.170\n)\n\n => Array\n(\n => 45.118.165.156\n)\n\n => Array\n(\n => 39.48.199.148\n)\n\n => Array\n(\n => 182.64.138.32\n)\n\n => Array\n(\n => 37.73.129.186\n)\n\n => Array\n(\n => 182.186.110.35\n)\n\n => Array\n(\n => 43.242.177.24\n)\n\n => Array\n(\n => 119.155.23.112\n)\n\n => Array\n(\n => 84.16.238.119\n)\n\n => Array\n(\n => 41.202.219.252\n)\n\n => Array\n(\n => 43.242.176.119\n)\n\n => Array\n(\n => 111.119.187.6\n)\n\n => Array\n(\n => 95.12.200.188\n)\n\n => Array\n(\n => 139.28.219.138\n)\n\n => Array\n(\n => 89.163.247.130\n)\n\n => Array\n(\n => 122.173.103.88\n)\n\n => Array\n(\n => 103.248.87.10\n)\n\n => Array\n(\n => 23.106.249.36\n)\n\n => Array\n(\n => 124.253.94.125\n)\n\n => Array\n(\n => 39.53.244.147\n)\n\n => Array\n(\n => 193.109.85.11\n)\n\n => Array\n(\n => 43.242.176.71\n)\n\n => Array\n(\n => 43.242.177.58\n)\n\n => Array\n(\n => 47.31.6.139\n)\n\n => Array\n(\n => 39.59.34.67\n)\n\n => Array\n(\n => 43.242.176.58\n)\n\n => Array\n(\n => 103.107.198.198\n)\n\n => Array\n(\n => 147.135.11.113\n)\n\n => Array\n(\n => 27.7.212.112\n)\n\n => Array\n(\n => 43.242.177.1\n)\n\n => Array\n(\n => 175.107.227.27\n)\n\n => Array\n(\n => 103.103.43.254\n)\n\n => Array\n(\n => 49.15.221.10\n)\n\n => Array\n(\n => 43.242.177.43\n)\n\n => Array\n(\n => 36.85.59.11\n)\n\n => Array\n(\n => 124.253.204.50\n)\n\n => Array\n(\n => 5.181.233.54\n)\n\n => Array\n(\n => 43.242.177.154\n)\n\n => Array\n(\n => 103.84.37.169\n)\n\n => Array\n(\n => 222.252.54.108\n)\n\n => Array\n(\n => 14.162.160.254\n)\n\n => Array\n(\n => 178.151.218.45\n)\n\n => Array\n(\n => 110.137.101.93\n)\n\n => Array\n(\n => 122.162.212.59\n)\n\n => Array\n(\n => 81.12.118.162\n)\n\n => Array\n(\n => 171.76.186.148\n)\n\n => Array\n(\n => 182.69.253.77\n)\n\n => Array\n(\n => 111.119.183.43\n)\n\n => Array\n(\n => 49.149.74.226\n)\n\n => Array\n(\n => 43.242.177.63\n)\n\n => Array\n(\n => 14.99.243.54\n)\n\n => Array\n(\n => 110.137.100.25\n)\n\n => Array\n(\n => 116.107.25.163\n)\n\n => Array\n(\n => 49.36.71.141\n)\n\n => Array\n(\n => 182.180.117.219\n)\n\n => Array\n(\n => 150.242.172.194\n)\n\n => Array\n(\n => 49.156.111.40\n)\n\n => Array\n(\n => 49.15.208.115\n)\n\n => Array\n(\n => 103.209.87.219\n)\n\n => Array\n(\n => 43.242.176.56\n)\n\n => Array\n(\n => 103.132.187.100\n)\n\n => Array\n(\n => 49.156.96.120\n)\n\n => Array\n(\n => 192.142.176.171\n)\n\n => Array\n(\n => 51.91.18.131\n)\n\n => Array\n(\n => 103.83.144.121\n)\n\n => Array\n(\n => 1.39.75.72\n)\n\n => Array\n(\n => 14.231.172.177\n)\n\n => Array\n(\n => 94.232.213.159\n)\n\n => Array\n(\n => 103.228.158.38\n)\n\n => Array\n(\n => 43.242.177.100\n)\n\n => Array\n(\n => 171.76.149.130\n)\n\n => Array\n(\n => 113.183.26.59\n)\n\n => Array\n(\n => 182.74.232.166\n)\n\n => Array\n(\n => 47.31.205.211\n)\n\n => Array\n(\n => 106.211.253.70\n)\n\n => Array\n(\n => 39.51.233.214\n)\n\n => Array\n(\n => 182.70.249.161\n)\n\n => Array\n(\n => 222.252.40.196\n)\n\n => Array\n(\n => 49.37.6.29\n)\n\n => Array\n(\n => 119.155.33.170\n)\n\n => Array\n(\n => 43.242.177.79\n)\n\n => Array\n(\n => 111.119.183.62\n)\n\n => Array\n(\n => 137.59.226.97\n)\n\n => Array\n(\n => 42.111.18.121\n)\n\n => Array\n(\n => 223.190.46.91\n)\n\n => Array\n(\n => 45.118.165.159\n)\n\n => Array\n(\n => 110.136.60.44\n)\n\n => Array\n(\n => 43.242.176.57\n)\n\n => Array\n(\n => 117.212.58.0\n)\n\n => Array\n(\n => 49.37.7.66\n)\n\n => Array\n(\n => 39.52.174.33\n)\n\n => Array\n(\n => 150.242.172.55\n)\n\n => Array\n(\n => 103.94.111.236\n)\n\n => Array\n(\n => 106.215.239.184\n)\n\n => Array\n(\n => 101.128.117.75\n)\n\n => Array\n(\n => 162.210.194.10\n)\n\n => Array\n(\n => 136.158.31.132\n)\n\n => Array\n(\n => 39.51.245.69\n)\n\n => Array\n(\n => 39.42.149.159\n)\n\n => Array\n(\n => 51.77.108.159\n)\n\n => Array\n(\n => 45.127.247.250\n)\n\n => Array\n(\n => 122.172.78.22\n)\n\n => Array\n(\n => 117.220.208.38\n)\n\n => Array\n(\n => 112.201.138.95\n)\n\n => Array\n(\n => 49.145.105.113\n)\n\n => Array\n(\n => 110.93.247.12\n)\n\n => Array\n(\n => 39.52.150.32\n)\n\n => Array\n(\n => 122.161.89.41\n)\n\n => Array\n(\n => 39.52.176.49\n)\n\n => Array\n(\n => 157.33.12.154\n)\n\n => Array\n(\n => 73.111.248.162\n)\n\n => Array\n(\n => 112.204.167.67\n)\n\n => Array\n(\n => 107.150.30.182\n)\n\n => Array\n(\n => 115.99.222.229\n)\n\n => Array\n(\n => 180.190.195.96\n)\n\n => Array\n(\n => 157.44.57.255\n)\n\n => Array\n(\n => 39.37.9.167\n)\n\n => Array\n(\n => 39.49.48.33\n)\n\n => Array\n(\n => 157.44.218.118\n)\n\n => Array\n(\n => 103.211.54.253\n)\n\n => Array\n(\n => 43.242.177.81\n)\n\n => Array\n(\n => 103.111.224.227\n)\n\n => Array\n(\n => 223.176.48.237\n)\n\n => Array\n(\n => 124.253.87.117\n)\n\n => Array\n(\n => 124.29.247.14\n)\n\n => Array\n(\n => 182.189.232.32\n)\n\n => Array\n(\n => 111.68.97.206\n)\n\n => Array\n(\n => 103.117.15.70\n)\n\n => Array\n(\n => 182.18.236.101\n)\n\n => Array\n(\n => 43.242.177.60\n)\n\n => Array\n(\n => 180.190.7.178\n)\n\n => Array\n(\n => 112.201.142.95\n)\n\n => Array\n(\n => 122.178.255.123\n)\n\n => Array\n(\n => 49.36.240.103\n)\n\n => Array\n(\n => 210.56.16.13\n)\n\n => Array\n(\n => 103.91.123.219\n)\n\n => Array\n(\n => 39.52.155.252\n)\n\n => Array\n(\n => 192.142.207.230\n)\n\n => Array\n(\n => 188.163.82.179\n)\n\n => Array\n(\n => 182.189.9.196\n)\n\n => Array\n(\n => 175.107.221.51\n)\n\n => Array\n(\n => 39.53.221.200\n)\n\n => Array\n(\n => 27.255.190.59\n)\n\n => Array\n(\n => 183.83.212.118\n)\n\n => Array\n(\n => 45.118.165.143\n)\n\n => Array\n(\n => 182.189.124.35\n)\n\n => Array\n(\n => 203.101.186.1\n)\n\n => Array\n(\n => 49.36.246.25\n)\n\n => Array\n(\n => 39.42.186.234\n)\n\n => Array\n(\n => 103.82.80.14\n)\n\n => Array\n(\n => 210.18.182.42\n)\n\n => Array\n(\n => 42.111.13.81\n)\n\n => Array\n(\n => 46.200.69.240\n)\n\n => Array\n(\n => 103.209.87.213\n)\n\n => Array\n(\n => 103.31.95.95\n)\n\n => Array\n(\n => 180.190.174.25\n)\n\n => Array\n(\n => 103.77.0.128\n)\n\n => Array\n(\n => 49.34.103.82\n)\n\n => Array\n(\n => 39.48.196.22\n)\n\n => Array\n(\n => 192.142.166.20\n)\n\n => Array\n(\n => 202.142.110.186\n)\n\n => Array\n(\n => 122.163.135.95\n)\n\n => Array\n(\n => 183.83.255.225\n)\n\n => Array\n(\n => 157.45.46.10\n)\n\n => Array\n(\n => 182.189.4.77\n)\n\n => Array\n(\n => 49.145.104.71\n)\n\n => Array\n(\n => 103.143.7.34\n)\n\n => Array\n(\n => 61.2.180.15\n)\n\n => Array\n(\n => 103.81.215.61\n)\n\n => Array\n(\n => 115.42.71.122\n)\n\n => Array\n(\n => 124.253.73.20\n)\n\n => Array\n(\n => 49.33.210.169\n)\n\n => Array\n(\n => 78.159.101.115\n)\n\n => Array\n(\n => 42.111.17.221\n)\n\n => Array\n(\n => 43.242.178.67\n)\n\n => Array\n(\n => 36.68.138.36\n)\n\n => Array\n(\n => 103.195.201.51\n)\n\n => Array\n(\n => 79.141.162.81\n)\n\n => Array\n(\n => 202.8.118.239\n)\n\n => Array\n(\n => 103.139.128.161\n)\n\n => Array\n(\n => 207.244.71.84\n)\n\n => Array\n(\n => 124.253.184.45\n)\n\n => Array\n(\n => 111.125.106.124\n)\n\n => Array\n(\n => 111.125.105.139\n)\n\n => Array\n(\n => 39.59.94.233\n)\n\n => Array\n(\n => 112.211.60.168\n)\n\n => Array\n(\n => 103.117.14.72\n)\n\n => Array\n(\n => 111.119.183.56\n)\n\n => Array\n(\n => 47.31.53.228\n)\n\n => Array\n(\n => 124.253.186.8\n)\n\n => Array\n(\n => 183.83.213.214\n)\n\n => Array\n(\n => 103.106.239.70\n)\n\n => Array\n(\n => 182.182.92.81\n)\n\n => Array\n(\n => 14.162.167.98\n)\n\n => Array\n(\n => 112.211.11.107\n)\n\n => Array\n(\n => 77.111.246.20\n)\n\n => Array\n(\n => 49.156.86.182\n)\n\n => Array\n(\n => 47.29.122.112\n)\n\n => Array\n(\n => 125.99.74.42\n)\n\n => Array\n(\n => 124.123.169.24\n)\n\n => Array\n(\n => 106.202.105.128\n)\n\n => Array\n(\n => 103.244.173.14\n)\n\n => Array\n(\n => 103.98.63.104\n)\n\n => Array\n(\n => 180.245.6.60\n)\n\n => Array\n(\n => 49.149.96.14\n)\n\n => Array\n(\n => 14.177.120.169\n)\n\n => Array\n(\n => 192.135.90.145\n)\n\n => Array\n(\n => 223.190.18.218\n)\n\n => Array\n(\n => 171.61.190.2\n)\n\n => Array\n(\n => 58.65.220.219\n)\n\n => Array\n(\n => 122.177.29.87\n)\n\n => Array\n(\n => 223.236.175.203\n)\n\n => Array\n(\n => 39.53.237.106\n)\n\n => Array\n(\n => 1.186.114.83\n)\n\n => Array\n(\n => 43.230.66.153\n)\n\n => Array\n(\n => 27.96.94.247\n)\n\n => Array\n(\n => 39.52.176.185\n)\n\n => Array\n(\n => 59.94.147.62\n)\n\n => Array\n(\n => 119.160.117.10\n)\n\n => Array\n(\n => 43.241.146.105\n)\n\n => Array\n(\n => 39.59.87.75\n)\n\n => Array\n(\n => 119.160.118.203\n)\n\n => Array\n(\n => 39.52.161.76\n)\n\n => Array\n(\n => 202.168.84.189\n)\n\n => Array\n(\n => 103.215.168.2\n)\n\n => Array\n(\n => 39.42.146.160\n)\n\n => Array\n(\n => 182.182.30.246\n)\n\n => Array\n(\n => 122.173.212.133\n)\n\n => Array\n(\n => 39.51.238.44\n)\n\n => Array\n(\n => 183.83.252.51\n)\n\n => Array\n(\n => 202.142.168.86\n)\n\n => Array\n(\n => 39.40.198.209\n)\n\n => Array\n(\n => 192.135.90.151\n)\n\n => Array\n(\n => 72.255.41.174\n)\n\n => Array\n(\n => 137.97.92.124\n)\n\n => Array\n(\n => 182.185.159.155\n)\n\n => Array\n(\n => 157.44.133.131\n)\n\n => Array\n(\n => 39.51.230.253\n)\n\n => Array\n(\n => 103.70.87.200\n)\n\n => Array\n(\n => 103.117.15.82\n)\n\n => Array\n(\n => 103.217.244.69\n)\n\n => Array\n(\n => 157.34.76.185\n)\n\n => Array\n(\n => 39.52.130.163\n)\n\n => Array\n(\n => 182.181.41.39\n)\n\n => Array\n(\n => 49.37.212.226\n)\n\n => Array\n(\n => 119.160.117.100\n)\n\n => Array\n(\n => 103.209.87.43\n)\n\n => Array\n(\n => 180.190.195.45\n)\n\n => Array\n(\n => 122.160.57.230\n)\n\n => Array\n(\n => 203.192.213.81\n)\n\n => Array\n(\n => 182.181.63.91\n)\n\n => Array\n(\n => 157.44.184.5\n)\n\n => Array\n(\n => 27.97.213.128\n)\n\n => Array\n(\n => 122.55.252.145\n)\n\n => Array\n(\n => 103.117.15.92\n)\n\n => Array\n(\n => 42.201.251.179\n)\n\n => Array\n(\n => 122.186.84.53\n)\n\n => Array\n(\n => 119.157.75.242\n)\n\n => Array\n(\n => 39.42.163.6\n)\n\n => Array\n(\n => 14.99.246.78\n)\n\n => Array\n(\n => 103.209.87.227\n)\n\n => Array\n(\n => 182.68.215.31\n)\n\n => Array\n(\n => 45.118.165.140\n)\n\n => Array\n(\n => 207.244.71.81\n)\n\n => Array\n(\n => 27.97.162.57\n)\n\n => Array\n(\n => 103.113.106.98\n)\n\n => Array\n(\n => 95.135.44.103\n)\n\n => Array\n(\n => 125.209.114.238\n)\n\n => Array\n(\n => 77.123.14.176\n)\n\n => Array\n(\n => 110.36.202.169\n)\n\n => Array\n(\n => 124.253.205.230\n)\n\n => Array\n(\n => 106.215.72.117\n)\n\n => Array\n(\n => 116.72.226.35\n)\n\n => Array\n(\n => 137.97.103.141\n)\n\n => Array\n(\n => 112.79.212.161\n)\n\n => Array\n(\n => 103.209.85.150\n)\n\n => Array\n(\n => 103.159.127.6\n)\n\n => Array\n(\n => 43.239.205.66\n)\n\n => Array\n(\n => 143.244.51.152\n)\n\n => Array\n(\n => 182.64.15.3\n)\n\n => Array\n(\n => 182.185.207.146\n)\n\n => Array\n(\n => 45.118.165.155\n)\n\n => Array\n(\n => 115.160.241.214\n)\n\n => Array\n(\n => 47.31.230.68\n)\n\n => Array\n(\n => 49.15.84.145\n)\n\n => Array\n(\n => 39.51.239.206\n)\n\n => Array\n(\n => 103.149.154.212\n)\n\n => Array\n(\n => 43.239.207.155\n)\n\n => Array\n(\n => 182.182.30.181\n)\n\n => Array\n(\n => 157.37.198.16\n)\n\n => Array\n(\n => 162.239.24.60\n)\n\n => Array\n(\n => 106.212.101.97\n)\n\n => Array\n(\n => 124.253.97.44\n)\n\n => Array\n(\n => 106.214.95.176\n)\n\n => Array\n(\n => 102.69.228.114\n)\n\n => Array\n(\n => 116.74.58.221\n)\n\n => Array\n(\n => 162.210.194.38\n)\n\n => Array\n(\n => 39.52.162.121\n)\n\n => Array\n(\n => 103.216.143.255\n)\n\n => Array\n(\n => 103.49.155.134\n)\n\n => Array\n(\n => 182.191.119.236\n)\n\n => Array\n(\n => 111.88.213.172\n)\n\n => Array\n(\n => 43.239.207.207\n)\n\n => Array\n(\n => 140.213.35.143\n)\n\n => Array\n(\n => 154.72.153.215\n)\n\n => Array\n(\n => 122.170.47.36\n)\n\n => Array\n(\n => 51.158.111.163\n)\n\n => Array\n(\n => 203.122.10.150\n)\n\n => Array\n(\n => 47.31.176.111\n)\n\n => Array\n(\n => 103.75.246.34\n)\n\n => Array\n(\n => 103.244.178.45\n)\n\n => Array\n(\n => 182.185.138.0\n)\n\n => Array\n(\n => 183.83.254.224\n)\n\n => Array\n(\n => 49.36.246.145\n)\n\n => Array\n(\n => 202.47.60.85\n)\n\n => Array\n(\n => 180.190.163.160\n)\n\n => Array\n(\n => 27.255.187.221\n)\n\n => Array\n(\n => 14.248.94.2\n)\n\n => Array\n(\n => 185.233.17.187\n)\n\n => Array\n(\n => 139.5.254.227\n)\n\n => Array\n(\n => 103.149.160.66\n)\n\n => Array\n(\n => 122.168.235.47\n)\n\n => Array\n(\n => 45.113.248.224\n)\n\n => Array\n(\n => 110.54.170.142\n)\n\n => Array\n(\n => 223.235.226.55\n)\n\n => Array\n(\n => 157.32.19.235\n)\n\n => Array\n(\n => 49.15.221.114\n)\n\n => Array\n(\n => 27.97.166.163\n)\n\n => Array\n(\n => 223.233.99.5\n)\n\n => Array\n(\n => 49.33.203.53\n)\n\n => Array\n(\n => 27.56.214.41\n)\n\n => Array\n(\n => 103.138.51.3\n)\n\n => Array\n(\n => 111.119.183.21\n)\n\n => Array\n(\n => 47.15.138.233\n)\n\n => Array\n(\n => 202.63.213.184\n)\n\n => Array\n(\n => 49.36.158.94\n)\n\n => Array\n(\n => 27.97.186.179\n)\n\n => Array\n(\n => 27.97.214.69\n)\n\n => Array\n(\n => 203.128.18.163\n)\n\n => Array\n(\n => 106.207.235.63\n)\n\n => Array\n(\n => 116.107.220.231\n)\n\n => Array\n(\n => 223.226.169.249\n)\n\n => Array\n(\n => 106.201.24.6\n)\n\n => Array\n(\n => 49.15.89.7\n)\n\n => Array\n(\n => 49.15.142.20\n)\n\n => Array\n(\n => 223.177.24.85\n)\n\n => Array\n(\n => 37.156.17.37\n)\n\n => Array\n(\n => 102.129.224.2\n)\n\n => Array\n(\n => 49.15.85.221\n)\n\n => Array\n(\n => 106.76.208.153\n)\n\n => Array\n(\n => 61.2.47.71\n)\n\n => Array\n(\n => 27.97.178.79\n)\n\n => Array\n(\n => 39.34.143.196\n)\n\n => Array\n(\n => 103.10.227.158\n)\n\n => Array\n(\n => 117.220.210.159\n)\n\n => Array\n(\n => 182.189.28.11\n)\n\n => Array\n(\n => 122.185.38.170\n)\n\n => Array\n(\n => 112.196.132.115\n)\n\n => Array\n(\n => 187.156.137.83\n)\n\n => Array\n(\n => 203.122.3.88\n)\n\n => Array\n(\n => 51.68.142.45\n)\n\n => Array\n(\n => 124.253.217.55\n)\n\n => Array\n(\n => 103.152.41.2\n)\n\n => Array\n(\n => 157.37.154.219\n)\n\n => Array\n(\n => 39.45.32.77\n)\n\n => Array\n(\n => 182.182.22.221\n)\n\n => Array\n(\n => 157.43.205.117\n)\n\n => Array\n(\n => 202.142.123.58\n)\n\n => Array\n(\n => 43.239.207.121\n)\n\n => Array\n(\n => 49.206.122.113\n)\n\n => Array\n(\n => 106.193.199.203\n)\n\n => Array\n(\n => 103.67.157.251\n)\n\n => Array\n(\n => 49.34.97.81\n)\n\n => Array\n(\n => 49.156.92.130\n)\n\n => Array\n(\n => 203.160.179.210\n)\n\n => Array\n(\n => 106.215.33.244\n)\n\n => Array\n(\n => 191.101.148.41\n)\n\n => Array\n(\n => 203.90.94.94\n)\n\n => Array\n(\n => 105.129.205.134\n)\n\n => Array\n(\n => 106.215.45.165\n)\n\n => Array\n(\n => 112.196.132.15\n)\n\n => Array\n(\n => 39.59.64.174\n)\n\n => Array\n(\n => 124.253.155.116\n)\n\n => Array\n(\n => 94.179.192.204\n)\n\n => Array\n(\n => 110.38.29.245\n)\n\n => Array\n(\n => 124.29.209.78\n)\n\n => Array\n(\n => 103.75.245.240\n)\n\n => Array\n(\n => 49.36.159.170\n)\n\n => Array\n(\n => 223.190.18.160\n)\n\n => Array\n(\n => 124.253.113.226\n)\n\n => Array\n(\n => 14.180.77.240\n)\n\n => Array\n(\n => 106.215.76.24\n)\n\n => Array\n(\n => 106.210.155.153\n)\n\n => Array\n(\n => 111.119.187.42\n)\n\n => Array\n(\n => 146.196.32.106\n)\n\n => Array\n(\n => 122.162.22.27\n)\n\n => Array\n(\n => 49.145.59.252\n)\n\n => Array\n(\n => 95.47.247.92\n)\n\n => Array\n(\n => 103.99.218.50\n)\n\n => Array\n(\n => 157.37.192.88\n)\n\n => Array\n(\n => 82.102.31.242\n)\n\n => Array\n(\n => 157.46.220.64\n)\n\n => Array\n(\n => 180.151.107.52\n)\n\n => Array\n(\n => 203.81.240.75\n)\n\n => Array\n(\n => 122.167.213.130\n)\n\n => Array\n(\n => 103.227.70.164\n)\n\n => Array\n(\n => 106.215.81.169\n)\n\n => Array\n(\n => 157.46.214.170\n)\n\n => Array\n(\n => 103.69.27.163\n)\n\n => Array\n(\n => 124.253.23.213\n)\n\n => Array\n(\n => 157.37.167.174\n)\n\n => Array\n(\n => 1.39.204.67\n)\n\n => Array\n(\n => 112.196.132.51\n)\n\n => Array\n(\n => 119.152.61.222\n)\n\n => Array\n(\n => 47.31.36.174\n)\n\n => Array\n(\n => 47.31.152.174\n)\n\n => Array\n(\n => 49.34.18.105\n)\n\n => Array\n(\n => 157.37.170.101\n)\n\n => Array\n(\n => 118.209.241.234\n)\n\n => Array\n(\n => 103.67.19.9\n)\n\n => Array\n(\n => 182.189.14.154\n)\n\n => Array\n(\n => 45.127.233.232\n)\n\n => Array\n(\n => 27.96.94.91\n)\n\n => Array\n(\n => 183.83.214.250\n)\n\n => Array\n(\n => 47.31.27.140\n)\n\n => Array\n(\n => 47.31.129.199\n)\n\n => Array\n(\n => 157.44.156.111\n)\n\n => Array\n(\n => 42.110.163.2\n)\n\n => Array\n(\n => 124.253.64.210\n)\n\n => Array\n(\n => 49.36.167.54\n)\n\n => Array\n(\n => 27.63.135.145\n)\n\n => Array\n(\n => 157.35.254.63\n)\n\n => Array\n(\n => 39.45.18.182\n)\n\n => Array\n(\n => 197.210.85.102\n)\n\n => Array\n(\n => 112.196.132.90\n)\n\n => Array\n(\n => 59.152.97.84\n)\n\n => Array\n(\n => 43.242.178.7\n)\n\n => Array\n(\n => 47.31.40.70\n)\n\n => Array\n(\n => 202.134.10.136\n)\n\n => Array\n(\n => 132.154.241.43\n)\n\n => Array\n(\n => 185.209.179.240\n)\n\n => Array\n(\n => 202.47.50.28\n)\n\n => Array\n(\n => 182.186.1.29\n)\n\n => Array\n(\n => 124.253.114.229\n)\n\n => Array\n(\n => 49.32.210.126\n)\n\n => Array\n(\n => 43.242.178.122\n)\n\n => Array\n(\n => 42.111.28.52\n)\n\n => Array\n(\n => 23.227.141.44\n)\n\n => Array\n(\n => 23.227.141.156\n)\n\n => Array\n(\n => 103.253.173.79\n)\n\n => Array\n(\n => 116.75.231.74\n)\n\n => Array\n(\n => 106.76.78.196\n)\n\n => Array\n(\n => 116.75.197.68\n)\n\n => Array\n(\n => 42.108.172.131\n)\n\n => Array\n(\n => 157.38.27.199\n)\n\n => Array\n(\n => 103.70.86.205\n)\n\n => Array\n(\n => 119.152.63.239\n)\n\n => Array\n(\n => 103.233.116.94\n)\n\n => Array\n(\n => 111.119.188.17\n)\n\n => Array\n(\n => 103.196.160.156\n)\n\n => Array\n(\n => 27.97.208.40\n)\n\n => Array\n(\n => 188.163.7.136\n)\n\n => Array\n(\n => 49.15.202.205\n)\n\n => Array\n(\n => 124.253.201.111\n)\n\n => Array\n(\n => 182.190.213.246\n)\n\n => Array\n(\n => 5.154.174.10\n)\n\n => Array\n(\n => 103.21.185.16\n)\n\n => Array\n(\n => 112.196.132.67\n)\n\n => Array\n(\n => 49.15.194.230\n)\n\n => Array\n(\n => 103.118.34.103\n)\n\n => Array\n(\n => 49.15.201.92\n)\n\n => Array\n(\n => 42.111.13.238\n)\n\n => Array\n(\n => 203.192.213.137\n)\n\n => Array\n(\n => 45.115.190.82\n)\n\n => Array\n(\n => 78.26.130.102\n)\n\n => Array\n(\n => 49.15.85.202\n)\n\n => Array\n(\n => 106.76.193.33\n)\n\n => Array\n(\n => 103.70.41.30\n)\n\n => Array\n(\n => 103.82.78.254\n)\n\n => Array\n(\n => 110.38.35.90\n)\n\n => Array\n(\n => 181.214.107.27\n)\n\n => Array\n(\n => 27.110.183.162\n)\n\n => Array\n(\n => 94.225.230.215\n)\n\n => Array\n(\n => 27.97.185.58\n)\n\n => Array\n(\n => 49.146.196.124\n)\n\n => Array\n(\n => 119.157.76.144\n)\n\n => Array\n(\n => 103.99.218.34\n)\n\n => Array\n(\n => 185.32.221.247\n)\n\n => Array\n(\n => 27.97.161.12\n)\n\n => Array\n(\n => 27.62.144.214\n)\n\n => Array\n(\n => 124.253.90.151\n)\n\n => Array\n(\n => 49.36.135.69\n)\n\n => Array\n(\n => 39.40.217.106\n)\n\n => Array\n(\n => 119.152.235.136\n)\n\n => Array\n(\n => 103.91.103.226\n)\n\n => Array\n(\n => 117.222.226.93\n)\n\n => Array\n(\n => 182.190.24.126\n)\n\n => Array\n(\n => 27.97.223.179\n)\n\n => Array\n(\n => 202.137.115.11\n)\n\n => Array\n(\n => 43.242.178.130\n)\n\n => Array\n(\n => 182.189.125.232\n)\n\n => Array\n(\n => 182.190.202.87\n)\n\n => Array\n(\n => 124.253.102.193\n)\n\n => Array\n(\n => 103.75.247.73\n)\n\n => Array\n(\n => 122.177.100.97\n)\n\n => Array\n(\n => 47.31.192.254\n)\n\n => Array\n(\n => 49.149.73.185\n)\n\n => Array\n(\n => 39.57.147.197\n)\n\n => Array\n(\n => 103.110.147.52\n)\n\n => Array\n(\n => 124.253.106.255\n)\n\n => Array\n(\n => 152.57.116.136\n)\n\n => Array\n(\n => 110.38.35.102\n)\n\n => Array\n(\n => 182.18.206.127\n)\n\n => Array\n(\n => 103.133.59.246\n)\n\n => Array\n(\n => 27.97.189.139\n)\n\n => Array\n(\n => 179.61.245.54\n)\n\n => Array\n(\n => 103.240.233.176\n)\n\n => Array\n(\n => 111.88.124.196\n)\n\n => Array\n(\n => 49.146.215.3\n)\n\n => Array\n(\n => 110.39.10.246\n)\n\n => Array\n(\n => 27.5.42.135\n)\n\n => Array\n(\n => 27.97.177.251\n)\n\n => Array\n(\n => 93.177.75.254\n)\n\n => Array\n(\n => 43.242.177.3\n)\n\n => Array\n(\n => 112.196.132.97\n)\n\n => Array\n(\n => 116.75.242.188\n)\n\n => Array\n(\n => 202.8.118.101\n)\n\n => Array\n(\n => 49.36.65.43\n)\n\n => Array\n(\n => 157.37.146.220\n)\n\n => Array\n(\n => 157.37.143.235\n)\n\n => Array\n(\n => 157.38.94.34\n)\n\n => Array\n(\n => 49.36.131.1\n)\n\n => Array\n(\n => 132.154.92.97\n)\n\n => Array\n(\n => 132.154.123.115\n)\n\n => Array\n(\n => 49.15.197.222\n)\n\n => Array\n(\n => 124.253.198.72\n)\n\n => Array\n(\n => 27.97.217.95\n)\n\n => Array\n(\n => 47.31.194.65\n)\n\n => Array\n(\n => 197.156.190.156\n)\n\n => Array\n(\n => 197.156.190.230\n)\n\n => Array\n(\n => 103.62.152.250\n)\n\n => Array\n(\n => 103.152.212.126\n)\n\n => Array\n(\n => 185.233.18.177\n)\n\n => Array\n(\n => 116.75.63.83\n)\n\n => Array\n(\n => 157.38.56.125\n)\n\n => Array\n(\n => 119.157.107.195\n)\n\n => Array\n(\n => 103.87.50.73\n)\n\n => Array\n(\n => 95.142.120.141\n)\n\n => Array\n(\n => 154.13.1.221\n)\n\n => Array\n(\n => 103.147.87.79\n)\n\n => Array\n(\n => 39.53.173.186\n)\n\n => Array\n(\n => 195.114.145.107\n)\n\n => Array\n(\n => 157.33.201.185\n)\n\n => Array\n(\n => 195.85.219.36\n)\n\n => Array\n(\n => 105.161.67.127\n)\n\n => Array\n(\n => 110.225.87.77\n)\n\n => Array\n(\n => 103.95.167.236\n)\n\n => Array\n(\n => 89.187.162.213\n)\n\n => Array\n(\n => 27.255.189.50\n)\n\n => Array\n(\n => 115.96.77.54\n)\n\n => Array\n(\n => 223.182.220.223\n)\n\n => Array\n(\n => 157.47.206.192\n)\n\n => Array\n(\n => 182.186.110.226\n)\n\n => Array\n(\n => 39.53.243.237\n)\n\n => Array\n(\n => 39.40.228.58\n)\n\n => Array\n(\n => 157.38.60.9\n)\n\n => Array\n(\n => 106.198.244.189\n)\n\n => Array\n(\n => 124.253.51.164\n)\n\n => Array\n(\n => 49.147.113.58\n)\n\n => Array\n(\n => 14.231.196.229\n)\n\n => Array\n(\n => 103.81.214.152\n)\n\n => Array\n(\n => 117.222.220.60\n)\n\n => Array\n(\n => 83.142.111.213\n)\n\n => Array\n(\n => 14.224.77.147\n)\n\n => Array\n(\n => 110.235.236.95\n)\n\n => Array\n(\n => 103.26.83.30\n)\n\n => Array\n(\n => 106.206.191.82\n)\n\n => Array\n(\n => 103.49.117.135\n)\n\n => Array\n(\n => 202.47.39.9\n)\n\n => Array\n(\n => 180.178.145.205\n)\n\n => Array\n(\n => 43.251.93.119\n)\n\n => Array\n(\n => 27.6.212.182\n)\n\n => Array\n(\n => 39.42.156.20\n)\n\n => Array\n(\n => 47.31.141.195\n)\n\n => Array\n(\n => 157.37.146.73\n)\n\n => Array\n(\n => 49.15.93.155\n)\n\n => Array\n(\n => 162.210.194.37\n)\n\n => Array\n(\n => 223.188.160.236\n)\n\n => Array\n(\n => 47.9.90.158\n)\n\n => Array\n(\n => 49.15.85.224\n)\n\n => Array\n(\n => 49.15.93.134\n)\n\n => Array\n(\n => 107.179.244.94\n)\n\n => Array\n(\n => 182.190.203.90\n)\n\n => Array\n(\n => 185.192.69.203\n)\n\n => Array\n(\n => 185.17.27.99\n)\n\n => Array\n(\n => 119.160.116.182\n)\n\n => Array\n(\n => 203.99.177.25\n)\n\n => Array\n(\n => 162.228.207.248\n)\n\n => Array\n(\n => 47.31.245.69\n)\n\n => Array\n(\n => 49.15.210.159\n)\n\n => Array\n(\n => 42.111.2.112\n)\n\n => Array\n(\n => 223.186.116.79\n)\n\n => Array\n(\n => 103.225.176.143\n)\n\n => Array\n(\n => 45.115.190.49\n)\n\n => Array\n(\n => 115.42.71.105\n)\n\n => Array\n(\n => 157.51.11.157\n)\n\n => Array\n(\n => 14.175.56.186\n)\n\n => Array\n(\n => 59.153.16.7\n)\n\n => Array\n(\n => 106.202.84.144\n)\n\n => Array\n(\n => 27.6.242.91\n)\n\n => Array\n(\n => 47.11.112.107\n)\n\n => Array\n(\n => 106.207.54.187\n)\n\n => Array\n(\n => 124.253.196.121\n)\n\n => Array\n(\n => 51.79.161.244\n)\n\n => Array\n(\n => 103.41.24.100\n)\n\n => Array\n(\n => 195.66.79.32\n)\n\n => Array\n(\n => 117.196.127.42\n)\n\n => Array\n(\n => 103.75.247.197\n)\n\n => Array\n(\n => 89.187.162.107\n)\n\n => Array\n(\n => 223.238.154.49\n)\n\n => Array\n(\n => 117.223.99.139\n)\n\n => Array\n(\n => 103.87.59.134\n)\n\n => Array\n(\n => 124.253.212.30\n)\n\n => Array\n(\n => 202.47.62.55\n)\n\n => Array\n(\n => 47.31.219.128\n)\n\n => Array\n(\n => 49.14.121.72\n)\n\n => Array\n(\n => 124.253.212.189\n)\n\n => Array\n(\n => 103.244.179.24\n)\n\n => Array\n(\n => 182.190.213.92\n)\n\n => Array\n(\n => 43.242.178.51\n)\n\n => Array\n(\n => 180.92.138.54\n)\n\n => Array\n(\n => 111.119.187.26\n)\n\n => Array\n(\n => 49.156.111.31\n)\n\n => Array\n(\n => 27.63.108.183\n)\n\n => Array\n(\n => 27.58.184.79\n)\n\n => Array\n(\n => 39.40.225.130\n)\n\n => Array\n(\n => 157.38.5.178\n)\n\n => Array\n(\n => 103.112.55.44\n)\n\n => Array\n(\n => 119.160.100.247\n)\n\n => Array\n(\n => 39.53.101.15\n)\n\n => Array\n(\n => 47.31.207.117\n)\n\n => Array\n(\n => 112.196.158.155\n)\n\n => Array\n(\n => 94.204.247.123\n)\n\n => Array\n(\n => 103.118.76.38\n)\n\n => Array\n(\n => 124.29.212.208\n)\n\n => Array\n(\n => 124.253.196.250\n)\n\n => Array\n(\n => 118.70.182.242\n)\n\n => Array\n(\n => 157.38.78.67\n)\n\n => Array\n(\n => 103.99.218.33\n)\n\n => Array\n(\n => 137.59.220.191\n)\n\n => Array\n(\n => 47.31.139.182\n)\n\n => Array\n(\n => 182.179.136.36\n)\n\n => Array\n(\n => 106.203.73.130\n)\n\n => Array\n(\n => 193.29.107.188\n)\n\n => Array\n(\n => 81.96.92.111\n)\n\n => Array\n(\n => 110.93.203.185\n)\n\n => Array\n(\n => 103.163.248.128\n)\n\n => Array\n(\n => 43.229.166.135\n)\n\n => Array\n(\n => 43.230.106.175\n)\n\n => Array\n(\n => 202.47.62.54\n)\n\n => Array\n(\n => 39.37.181.46\n)\n\n => Array\n(\n => 49.15.204.204\n)\n\n => Array\n(\n => 122.163.237.110\n)\n\n => Array\n(\n => 45.249.8.92\n)\n\n => Array\n(\n => 27.34.50.159\n)\n\n => Array\n(\n => 39.42.171.27\n)\n\n => Array\n(\n => 124.253.101.195\n)\n\n => Array\n(\n => 188.166.145.20\n)\n\n => Array\n(\n => 103.83.145.220\n)\n\n => Array\n(\n => 39.40.96.137\n)\n\n => Array\n(\n => 157.37.185.196\n)\n\n => Array\n(\n => 103.115.124.32\n)\n\n => Array\n(\n => 72.255.48.85\n)\n\n => Array\n(\n => 124.253.74.46\n)\n\n => Array\n(\n => 60.243.225.5\n)\n\n => Array\n(\n => 103.58.152.194\n)\n\n => Array\n(\n => 14.248.71.63\n)\n\n => Array\n(\n => 152.57.214.137\n)\n\n => Array\n(\n => 103.166.58.14\n)\n\n => Array\n(\n => 14.248.71.103\n)\n\n => Array\n(\n => 49.156.103.124\n)\n\n => Array\n(\n => 103.99.218.56\n)\n\n => Array\n(\n => 27.97.177.246\n)\n\n => Array\n(\n => 152.57.94.84\n)\n\n => Array\n(\n => 111.119.187.60\n)\n\n => Array\n(\n => 119.160.99.11\n)\n\n => Array\n(\n => 117.203.11.220\n)\n\n => Array\n(\n => 114.31.131.67\n)\n\n => Array\n(\n => 47.31.253.95\n)\n\n => Array\n(\n => 83.139.184.178\n)\n\n => Array\n(\n => 125.57.9.72\n)\n\n => Array\n(\n => 185.233.16.53\n)\n\n => Array\n(\n => 49.36.180.197\n)\n\n => Array\n(\n => 95.142.119.27\n)\n\n => Array\n(\n => 223.225.70.77\n)\n\n => Array\n(\n => 47.15.222.200\n)\n\n => Array\n(\n => 47.15.218.231\n)\n\n => Array\n(\n => 111.119.187.34\n)\n\n => Array\n(\n => 157.37.198.81\n)\n\n => Array\n(\n => 43.242.177.92\n)\n\n => Array\n(\n => 122.161.68.214\n)\n\n => Array\n(\n => 47.31.145.92\n)\n\n => Array\n(\n => 27.7.196.201\n)\n\n => Array\n(\n => 39.42.172.183\n)\n\n => Array\n(\n => 49.15.129.162\n)\n\n => Array\n(\n => 49.15.206.110\n)\n\n => Array\n(\n => 39.57.141.45\n)\n\n => Array\n(\n => 171.229.175.90\n)\n\n => Array\n(\n => 119.160.68.200\n)\n\n => Array\n(\n => 193.176.84.214\n)\n\n => Array\n(\n => 43.242.177.77\n)\n\n => Array\n(\n => 137.59.220.95\n)\n\n => Array\n(\n => 122.177.118.209\n)\n\n => Array\n(\n => 103.92.214.27\n)\n\n => Array\n(\n => 178.62.10.228\n)\n\n => Array\n(\n => 103.81.214.91\n)\n\n => Array\n(\n => 156.146.33.68\n)\n\n => Array\n(\n => 42.118.116.60\n)\n\n => Array\n(\n => 183.87.122.190\n)\n\n => Array\n(\n => 157.37.159.162\n)\n\n => Array\n(\n => 59.153.16.9\n)\n\n => Array\n(\n => 223.185.43.241\n)\n\n => Array\n(\n => 103.81.214.153\n)\n\n => Array\n(\n => 47.31.143.169\n)\n\n => Array\n(\n => 112.196.158.250\n)\n\n => Array\n(\n => 156.146.36.110\n)\n\n => Array\n(\n => 27.255.34.80\n)\n\n => Array\n(\n => 49.205.77.19\n)\n\n => Array\n(\n => 95.142.120.20\n)\n\n => Array\n(\n => 171.49.195.53\n)\n\n => Array\n(\n => 39.37.152.132\n)\n\n => Array\n(\n => 103.121.204.237\n)\n\n => Array\n(\n => 43.242.176.153\n)\n\n => Array\n(\n => 43.242.176.120\n)\n\n => Array\n(\n => 122.161.66.120\n)\n\n => Array\n(\n => 182.70.140.223\n)\n\n => Array\n(\n => 103.201.135.226\n)\n\n => Array\n(\n => 202.47.44.135\n)\n\n => Array\n(\n => 182.179.172.27\n)\n\n => Array\n(\n => 185.22.173.86\n)\n\n => Array\n(\n => 67.205.148.219\n)\n\n => Array\n(\n => 27.58.183.140\n)\n\n => Array\n(\n => 39.42.118.163\n)\n\n => Array\n(\n => 117.5.204.59\n)\n\n => Array\n(\n => 223.182.193.163\n)\n\n => Array\n(\n => 157.37.184.33\n)\n\n => Array\n(\n => 110.37.218.92\n)\n\n => Array\n(\n => 106.215.8.67\n)\n\n => Array\n(\n => 39.42.94.179\n)\n\n => Array\n(\n => 106.51.25.124\n)\n\n => Array\n(\n => 157.42.25.212\n)\n\n => Array\n(\n => 43.247.40.170\n)\n\n => Array\n(\n => 101.50.108.111\n)\n\n => Array\n(\n => 117.102.48.152\n)\n\n => Array\n(\n => 95.142.120.48\n)\n\n => Array\n(\n => 183.81.121.160\n)\n\n => Array\n(\n => 42.111.21.195\n)\n\n => Array\n(\n => 50.7.142.180\n)\n\n => Array\n(\n => 223.130.28.33\n)\n\n => Array\n(\n => 107.161.86.141\n)\n\n => Array\n(\n => 117.203.249.159\n)\n\n => Array\n(\n => 110.225.192.64\n)\n\n => Array\n(\n => 157.37.152.168\n)\n\n => Array\n(\n => 110.39.2.202\n)\n\n => Array\n(\n => 23.106.56.52\n)\n\n => Array\n(\n => 59.150.87.85\n)\n\n => Array\n(\n => 122.162.175.128\n)\n\n => Array\n(\n => 39.40.63.182\n)\n\n => Array\n(\n => 182.190.108.76\n)\n\n => Array\n(\n => 49.36.44.216\n)\n\n => Array\n(\n => 73.105.5.185\n)\n\n => Array\n(\n => 157.33.67.204\n)\n\n => Array\n(\n => 157.37.164.171\n)\n\n => Array\n(\n => 192.119.160.21\n)\n\n => Array\n(\n => 156.146.59.29\n)\n\n => Array\n(\n => 182.190.97.213\n)\n\n => Array\n(\n => 39.53.196.168\n)\n\n => Array\n(\n => 112.196.132.93\n)\n\n => Array\n(\n => 182.189.7.18\n)\n\n => Array\n(\n => 101.53.232.117\n)\n\n => Array\n(\n => 43.242.178.105\n)\n\n => Array\n(\n => 49.145.233.44\n)\n\n => Array\n(\n => 5.107.214.18\n)\n\n => Array\n(\n => 139.5.242.124\n)\n\n => Array\n(\n => 47.29.244.80\n)\n\n => Array\n(\n => 43.242.178.180\n)\n\n => Array\n(\n => 194.110.84.171\n)\n\n => Array\n(\n => 103.68.217.99\n)\n\n => Array\n(\n => 182.182.27.59\n)\n\n => Array\n(\n => 119.152.139.146\n)\n\n => Array\n(\n => 39.37.131.1\n)\n\n => Array\n(\n => 106.210.99.47\n)\n\n => Array\n(\n => 103.225.176.68\n)\n\n => Array\n(\n => 42.111.23.67\n)\n\n => Array\n(\n => 223.225.37.57\n)\n\n => Array\n(\n => 114.79.1.247\n)\n\n => Array\n(\n => 157.42.28.39\n)\n\n => Array\n(\n => 47.15.13.68\n)\n\n => Array\n(\n => 223.230.151.59\n)\n\n => Array\n(\n => 115.186.7.112\n)\n\n => Array\n(\n => 111.92.78.33\n)\n\n => Array\n(\n => 119.160.117.249\n)\n\n => Array\n(\n => 103.150.209.45\n)\n\n => Array\n(\n => 182.189.22.170\n)\n\n => Array\n(\n => 49.144.108.82\n)\n\n => Array\n(\n => 39.49.75.65\n)\n\n => Array\n(\n => 39.52.205.223\n)\n\n => Array\n(\n => 49.48.247.53\n)\n\n => Array\n(\n => 5.149.250.222\n)\n\n => Array\n(\n => 47.15.187.153\n)\n\n => Array\n(\n => 103.70.86.101\n)\n\n => Array\n(\n => 112.196.158.138\n)\n\n => Array\n(\n => 156.241.242.139\n)\n\n => Array\n(\n => 157.33.205.213\n)\n\n => Array\n(\n => 39.53.206.247\n)\n\n => Array\n(\n => 157.45.83.132\n)\n\n => Array\n(\n => 49.36.220.138\n)\n\n => Array\n(\n => 202.47.47.118\n)\n\n => Array\n(\n => 182.185.233.224\n)\n\n => Array\n(\n => 182.189.30.99\n)\n\n => Array\n(\n => 223.233.68.178\n)\n\n => Array\n(\n => 161.35.139.87\n)\n\n => Array\n(\n => 121.46.65.124\n)\n\n => Array\n(\n => 5.195.154.87\n)\n\n => Array\n(\n => 103.46.236.71\n)\n\n => Array\n(\n => 195.114.147.119\n)\n\n => Array\n(\n => 195.85.219.35\n)\n\n => Array\n(\n => 111.119.183.34\n)\n\n => Array\n(\n => 39.34.158.41\n)\n\n => Array\n(\n => 180.178.148.13\n)\n\n => Array\n(\n => 122.161.66.166\n)\n\n => Array\n(\n => 185.233.18.1\n)\n\n => Array\n(\n => 146.196.34.119\n)\n\n => Array\n(\n => 27.6.253.159\n)\n\n => Array\n(\n => 198.8.92.156\n)\n\n => Array\n(\n => 106.206.179.160\n)\n\n => Array\n(\n => 202.164.133.53\n)\n\n => Array\n(\n => 112.196.141.214\n)\n\n => Array\n(\n => 95.135.15.148\n)\n\n => Array\n(\n => 111.92.119.165\n)\n\n => Array\n(\n => 84.17.34.18\n)\n\n => Array\n(\n => 49.36.232.117\n)\n\n => Array\n(\n => 122.180.235.92\n)\n\n => Array\n(\n => 89.187.163.177\n)\n\n => Array\n(\n => 103.217.238.38\n)\n\n => Array\n(\n => 103.163.248.115\n)\n\n => Array\n(\n => 156.146.59.10\n)\n\n => Array\n(\n => 223.233.68.183\n)\n\n => Array\n(\n => 103.12.198.92\n)\n\n => Array\n(\n => 42.111.9.221\n)\n\n => Array\n(\n => 111.92.77.242\n)\n\n => Array\n(\n => 192.142.128.26\n)\n\n => Array\n(\n => 182.69.195.139\n)\n\n => Array\n(\n => 103.209.83.110\n)\n\n => Array\n(\n => 207.244.71.80\n)\n\n => Array\n(\n => 41.140.106.29\n)\n\n => Array\n(\n => 45.118.167.65\n)\n\n => Array\n(\n => 45.118.167.70\n)\n\n => Array\n(\n => 157.37.159.180\n)\n\n => Array\n(\n => 103.217.178.194\n)\n\n => Array\n(\n => 27.255.165.94\n)\n\n => Array\n(\n => 45.133.7.42\n)\n\n => Array\n(\n => 43.230.65.168\n)\n\n => Array\n(\n => 39.53.196.221\n)\n\n => Array\n(\n => 42.111.17.83\n)\n\n => Array\n(\n => 110.39.12.34\n)\n\n => Array\n(\n => 45.118.158.169\n)\n\n => Array\n(\n => 202.142.110.165\n)\n\n => Array\n(\n => 106.201.13.212\n)\n\n => Array\n(\n => 103.211.14.94\n)\n\n => Array\n(\n => 160.202.37.105\n)\n\n => Array\n(\n => 103.99.199.34\n)\n\n => Array\n(\n => 183.83.45.104\n)\n\n => Array\n(\n => 49.36.233.107\n)\n\n => Array\n(\n => 182.68.21.51\n)\n\n => Array\n(\n => 110.227.93.182\n)\n\n => Array\n(\n => 180.178.144.251\n)\n\n => Array\n(\n => 129.0.102.0\n)\n\n => Array\n(\n => 124.253.105.176\n)\n\n => Array\n(\n => 105.156.139.225\n)\n\n => Array\n(\n => 208.117.87.154\n)\n\n => Array\n(\n => 138.68.185.17\n)\n\n => Array\n(\n => 43.247.41.207\n)\n\n => Array\n(\n => 49.156.106.105\n)\n\n => Array\n(\n => 223.238.197.124\n)\n\n => Array\n(\n => 202.47.39.96\n)\n\n => Array\n(\n => 223.226.131.80\n)\n\n => Array\n(\n => 122.161.48.139\n)\n\n => Array\n(\n => 106.201.144.12\n)\n\n => Array\n(\n => 122.178.223.244\n)\n\n => Array\n(\n => 195.181.164.65\n)\n\n => Array\n(\n => 106.195.12.187\n)\n\n => Array\n(\n => 124.253.48.48\n)\n\n => Array\n(\n => 103.140.30.214\n)\n\n => Array\n(\n => 180.178.147.132\n)\n\n => Array\n(\n => 138.197.139.130\n)\n\n => Array\n(\n => 5.254.2.138\n)\n\n => Array\n(\n => 183.81.93.25\n)\n\n => Array\n(\n => 182.70.39.254\n)\n\n => Array\n(\n => 106.223.87.131\n)\n\n => Array\n(\n => 106.203.91.114\n)\n\n => Array\n(\n => 196.70.137.128\n)\n\n => Array\n(\n => 150.242.62.167\n)\n\n => Array\n(\n => 184.170.243.198\n)\n\n => Array\n(\n => 59.89.30.66\n)\n\n => Array\n(\n => 49.156.112.201\n)\n\n => Array\n(\n => 124.29.212.168\n)\n\n => Array\n(\n => 103.204.170.238\n)\n\n => Array\n(\n => 124.253.116.81\n)\n\n => Array\n(\n => 41.248.102.107\n)\n\n => Array\n(\n => 119.160.100.51\n)\n\n => Array\n(\n => 5.254.40.91\n)\n\n => Array\n(\n => 103.149.154.25\n)\n\n => Array\n(\n => 103.70.41.28\n)\n\n => Array\n(\n => 103.151.234.42\n)\n\n => Array\n(\n => 39.37.142.107\n)\n\n => Array\n(\n => 27.255.186.115\n)\n\n => Array\n(\n => 49.15.193.151\n)\n\n => Array\n(\n => 103.201.146.115\n)\n\n => Array\n(\n => 223.228.177.70\n)\n\n => Array\n(\n => 182.179.141.37\n)\n\n => Array\n(\n => 110.172.131.126\n)\n\n => Array\n(\n => 45.116.232.0\n)\n\n => Array\n(\n => 193.37.32.206\n)\n\n => Array\n(\n => 119.152.62.246\n)\n\n => Array\n(\n => 180.178.148.228\n)\n\n => Array\n(\n => 195.114.145.120\n)\n\n => Array\n(\n => 122.160.49.194\n)\n\n => Array\n(\n => 103.240.237.17\n)\n\n => Array\n(\n => 103.75.245.238\n)\n\n => Array\n(\n => 124.253.215.148\n)\n\n => Array\n(\n => 45.118.165.146\n)\n\n => Array\n(\n => 103.75.244.111\n)\n\n => Array\n(\n => 223.185.7.42\n)\n\n => Array\n(\n => 139.5.240.165\n)\n\n => Array\n(\n => 45.251.117.204\n)\n\n => Array\n(\n => 132.154.71.227\n)\n\n => Array\n(\n => 178.92.100.97\n)\n\n => Array\n(\n => 49.48.248.42\n)\n\n => Array\n(\n => 182.190.109.252\n)\n\n => Array\n(\n => 43.231.57.209\n)\n\n => Array\n(\n => 39.37.185.133\n)\n\n => Array\n(\n => 123.17.79.174\n)\n\n => Array\n(\n => 180.178.146.215\n)\n\n => Array\n(\n => 41.248.83.40\n)\n\n => Array\n(\n => 103.255.4.79\n)\n\n => Array\n(\n => 103.39.119.233\n)\n\n => Array\n(\n => 85.203.44.24\n)\n\n => Array\n(\n => 93.74.18.246\n)\n\n => Array\n(\n => 95.142.120.51\n)\n\n => Array\n(\n => 202.47.42.57\n)\n\n => Array\n(\n => 41.202.219.253\n)\n\n => Array\n(\n => 154.28.188.182\n)\n\n => Array\n(\n => 14.163.178.106\n)\n\n => Array\n(\n => 118.185.57.226\n)\n\n => Array\n(\n => 49.15.141.102\n)\n\n => Array\n(\n => 182.189.86.47\n)\n\n => Array\n(\n => 111.88.68.79\n)\n\n => Array\n(\n => 156.146.59.8\n)\n\n => Array\n(\n => 119.152.62.82\n)\n\n => Array\n(\n => 49.207.128.103\n)\n\n => Array\n(\n => 203.212.30.234\n)\n\n => Array\n(\n => 41.202.219.254\n)\n\n => Array\n(\n => 103.46.203.10\n)\n\n => Array\n(\n => 112.79.141.15\n)\n\n => Array\n(\n => 103.68.218.75\n)\n\n => Array\n(\n => 49.35.130.14\n)\n\n => Array\n(\n => 172.247.129.90\n)\n\n => Array\n(\n => 116.90.74.214\n)\n\n => Array\n(\n => 180.178.142.242\n)\n\n => Array\n(\n => 111.119.183.59\n)\n\n => Array\n(\n => 117.5.103.189\n)\n\n => Array\n(\n => 203.110.93.146\n)\n\n => Array\n(\n => 188.163.97.86\n)\n\n => Array\n(\n => 124.253.90.47\n)\n\n => Array\n(\n => 139.167.249.160\n)\n\n => Array\n(\n => 103.226.206.55\n)\n\n => Array\n(\n => 154.28.188.191\n)\n\n => Array\n(\n => 182.190.197.205\n)\n\n => Array\n(\n => 111.119.183.33\n)\n\n => Array\n(\n => 14.253.254.64\n)\n\n => Array\n(\n => 117.237.197.246\n)\n\n => Array\n(\n => 172.105.53.82\n)\n\n => Array\n(\n => 124.253.207.164\n)\n\n => Array\n(\n => 103.255.4.33\n)\n\n => Array\n(\n => 27.63.131.206\n)\n\n => Array\n(\n => 103.118.170.99\n)\n\n => Array\n(\n => 111.119.183.55\n)\n\n => Array\n(\n => 14.182.101.109\n)\n\n => Array\n(\n => 175.107.223.199\n)\n\n => Array\n(\n => 39.57.168.94\n)\n\n => Array\n(\n => 122.182.213.139\n)\n\n => Array\n(\n => 112.79.214.237\n)\n\n => Array\n(\n => 27.6.252.22\n)\n\n => Array\n(\n => 89.163.212.83\n)\n\n => Array\n(\n => 182.189.23.1\n)\n\n => Array\n(\n => 49.15.222.253\n)\n\n => Array\n(\n => 125.63.97.110\n)\n\n => Array\n(\n => 223.233.65.159\n)\n\n => Array\n(\n => 139.99.159.18\n)\n\n => Array\n(\n => 45.118.165.137\n)\n\n => Array\n(\n => 39.52.2.167\n)\n\n => Array\n(\n => 39.57.141.24\n)\n\n => Array\n(\n => 27.5.32.145\n)\n\n => Array\n(\n => 49.36.212.33\n)\n\n => Array\n(\n => 157.33.218.32\n)\n\n => Array\n(\n => 116.71.4.122\n)\n\n => Array\n(\n => 110.93.244.176\n)\n\n => Array\n(\n => 154.73.203.156\n)\n\n => Array\n(\n => 136.158.30.235\n)\n\n => Array\n(\n => 122.161.53.72\n)\n\n => Array\n(\n => 106.203.203.156\n)\n\n => Array\n(\n => 45.133.7.22\n)\n\n => Array\n(\n => 27.255.180.69\n)\n\n => Array\n(\n => 94.46.244.3\n)\n\n => Array\n(\n => 43.242.178.157\n)\n\n => Array\n(\n => 171.79.189.215\n)\n\n => Array\n(\n => 37.117.141.89\n)\n\n => Array\n(\n => 196.92.32.64\n)\n\n => Array\n(\n => 154.73.203.157\n)\n\n => Array\n(\n => 183.83.176.14\n)\n\n => Array\n(\n => 106.215.84.145\n)\n\n => Array\n(\n => 95.142.120.12\n)\n\n => Array\n(\n => 190.232.110.94\n)\n\n => Array\n(\n => 179.6.194.47\n)\n\n => Array\n(\n => 103.62.155.172\n)\n\n => Array\n(\n => 39.34.156.177\n)\n\n => Array\n(\n => 122.161.49.120\n)\n\n => Array\n(\n => 103.58.155.253\n)\n\n => Array\n(\n => 175.107.226.20\n)\n\n => Array\n(\n => 206.81.28.165\n)\n\n => Array\n(\n => 49.36.216.36\n)\n\n => Array\n(\n => 104.223.95.178\n)\n\n => Array\n(\n => 122.177.69.35\n)\n\n => Array\n(\n => 39.57.163.107\n)\n\n => Array\n(\n => 122.161.53.35\n)\n\n => Array\n(\n => 182.190.102.13\n)\n\n => Array\n(\n => 122.161.68.95\n)\n\n => Array\n(\n => 154.73.203.147\n)\n\n => Array\n(\n => 122.173.125.2\n)\n\n => Array\n(\n => 117.96.140.189\n)\n\n => Array\n(\n => 106.200.244.10\n)\n\n => Array\n(\n => 110.36.202.5\n)\n\n => Array\n(\n => 124.253.51.144\n)\n\n => Array\n(\n => 176.100.1.145\n)\n\n => Array\n(\n => 156.146.59.20\n)\n\n => Array\n(\n => 122.176.100.151\n)\n\n => Array\n(\n => 185.217.117.237\n)\n\n => Array\n(\n => 49.37.223.97\n)\n\n => Array\n(\n => 101.50.108.80\n)\n\n => Array\n(\n => 124.253.155.88\n)\n\n => Array\n(\n => 39.40.208.96\n)\n\n => Array\n(\n => 122.167.151.154\n)\n\n => Array\n(\n => 172.98.89.13\n)\n\n => Array\n(\n => 103.91.52.6\n)\n\n => Array\n(\n => 106.203.84.5\n)\n\n => Array\n(\n => 117.216.221.34\n)\n\n => Array\n(\n => 154.73.203.131\n)\n\n => Array\n(\n => 223.182.210.117\n)\n\n => Array\n(\n => 49.36.185.208\n)\n\n => Array\n(\n => 111.119.183.30\n)\n\n => Array\n(\n => 39.42.107.13\n)\n\n => Array\n(\n => 39.40.15.174\n)\n\n => Array\n(\n => 1.38.244.65\n)\n\n => Array\n(\n => 49.156.75.252\n)\n\n => Array\n(\n => 122.161.51.99\n)\n\n => Array\n(\n => 27.73.78.57\n)\n\n => Array\n(\n => 49.48.228.70\n)\n\n => Array\n(\n => 111.119.183.18\n)\n\n => Array\n(\n => 116.204.252.218\n)\n\n => Array\n(\n => 73.173.40.248\n)\n\n => Array\n(\n => 223.130.28.81\n)\n\n => Array\n(\n => 202.83.58.81\n)\n\n => Array\n(\n => 45.116.233.31\n)\n\n => Array\n(\n => 111.119.183.1\n)\n\n => Array\n(\n => 45.133.7.66\n)\n\n => Array\n(\n => 39.48.204.174\n)\n\n => Array\n(\n => 37.19.213.30\n)\n\n => Array\n(\n => 111.119.183.22\n)\n\n => Array\n(\n => 122.177.74.19\n)\n\n => Array\n(\n => 124.253.80.59\n)\n\n => Array\n(\n => 111.119.183.60\n)\n\n => Array\n(\n => 157.39.106.191\n)\n\n => Array\n(\n => 157.47.86.121\n)\n\n => Array\n(\n => 47.31.159.100\n)\n\n => Array\n(\n => 106.214.85.144\n)\n\n => Array\n(\n => 182.189.22.197\n)\n\n => Array\n(\n => 111.119.183.51\n)\n\n => Array\n(\n => 202.47.35.57\n)\n\n => Array\n(\n => 42.108.33.220\n)\n\n => Array\n(\n => 180.178.146.158\n)\n\n => Array\n(\n => 124.253.184.239\n)\n\n => Array\n(\n => 103.165.20.8\n)\n\n => Array\n(\n => 94.178.239.156\n)\n\n => Array\n(\n => 72.255.41.142\n)\n\n => Array\n(\n => 116.90.107.102\n)\n\n => Array\n(\n => 39.36.164.250\n)\n\n => Array\n(\n => 124.253.195.172\n)\n\n => Array\n(\n => 203.142.218.149\n)\n\n => Array\n(\n => 157.43.165.180\n)\n\n => Array\n(\n => 39.40.242.57\n)\n\n => Array\n(\n => 103.92.43.150\n)\n\n => Array\n(\n => 39.42.133.202\n)\n\n => Array\n(\n => 119.160.66.11\n)\n\n => Array\n(\n => 138.68.3.7\n)\n\n => Array\n(\n => 210.56.125.226\n)\n\n => Array\n(\n => 157.50.4.249\n)\n\n => Array\n(\n => 124.253.81.162\n)\n\n => Array\n(\n => 103.240.235.141\n)\n\n => Array\n(\n => 132.154.128.20\n)\n\n => Array\n(\n => 49.156.115.37\n)\n\n => Array\n(\n => 45.133.7.48\n)\n\n => Array\n(\n => 122.161.49.137\n)\n\n => Array\n(\n => 202.47.46.31\n)\n\n => Array\n(\n => 192.140.145.148\n)\n\n => Array\n(\n => 202.14.123.10\n)\n\n => Array\n(\n => 122.161.53.98\n)\n\n => Array\n(\n => 124.253.114.113\n)\n\n => Array\n(\n => 103.227.70.34\n)\n\n => Array\n(\n => 223.228.175.227\n)\n\n => Array\n(\n => 157.39.119.110\n)\n\n => Array\n(\n => 180.188.224.231\n)\n\n => Array\n(\n => 132.154.188.85\n)\n\n => Array\n(\n => 197.210.227.207\n)\n\n => Array\n(\n => 103.217.123.177\n)\n\n => Array\n(\n => 124.253.85.31\n)\n\n => Array\n(\n => 123.201.105.97\n)\n\n => Array\n(\n => 39.57.190.37\n)\n\n => Array\n(\n => 202.63.205.248\n)\n\n => Array\n(\n => 122.161.51.100\n)\n\n => Array\n(\n => 39.37.163.97\n)\n\n => Array\n(\n => 43.231.57.173\n)\n\n => Array\n(\n => 223.225.135.169\n)\n\n => Array\n(\n => 119.160.71.136\n)\n\n => Array\n(\n => 122.165.114.93\n)\n\n => Array\n(\n => 47.11.77.102\n)\n\n => Array\n(\n => 49.149.107.198\n)\n\n => Array\n(\n => 192.111.134.206\n)\n\n => Array\n(\n => 182.64.102.43\n)\n\n => Array\n(\n => 124.253.184.111\n)\n\n => Array\n(\n => 171.237.97.228\n)\n\n => Array\n(\n => 117.237.237.101\n)\n\n => Array\n(\n => 49.36.33.19\n)\n\n => Array\n(\n => 103.31.101.241\n)\n\n => Array\n(\n => 129.0.207.203\n)\n\n => Array\n(\n => 157.39.122.155\n)\n\n => Array\n(\n => 197.210.85.120\n)\n\n => Array\n(\n => 124.253.219.201\n)\n\n => Array\n(\n => 152.57.75.92\n)\n\n => Array\n(\n => 169.149.195.121\n)\n\n => Array\n(\n => 198.16.76.27\n)\n\n => Array\n(\n => 157.43.192.188\n)\n\n => Array\n(\n => 119.155.244.221\n)\n\n => Array\n(\n => 39.51.242.216\n)\n\n => Array\n(\n => 39.57.180.158\n)\n\n => Array\n(\n => 134.202.32.5\n)\n\n => Array\n(\n => 122.176.139.205\n)\n\n => Array\n(\n => 151.243.50.9\n)\n\n => Array\n(\n => 39.52.99.161\n)\n\n => Array\n(\n => 136.144.33.95\n)\n\n => Array\n(\n => 157.37.205.216\n)\n\n => Array\n(\n => 217.138.220.134\n)\n\n => Array\n(\n => 41.140.106.65\n)\n\n => Array\n(\n => 39.37.253.126\n)\n\n => Array\n(\n => 103.243.44.240\n)\n\n => Array\n(\n => 157.46.169.29\n)\n\n => Array\n(\n => 92.119.177.122\n)\n\n => Array\n(\n => 196.240.60.21\n)\n\n => Array\n(\n => 122.161.6.246\n)\n\n => Array\n(\n => 117.202.162.46\n)\n\n => Array\n(\n => 205.164.137.120\n)\n\n => Array\n(\n => 171.237.79.241\n)\n\n => Array\n(\n => 198.16.76.28\n)\n\n => Array\n(\n => 103.100.4.151\n)\n\n => Array\n(\n => 178.239.162.236\n)\n\n => Array\n(\n => 106.197.31.240\n)\n\n => Array\n(\n => 122.168.179.251\n)\n\n => Array\n(\n => 39.37.167.126\n)\n\n => Array\n(\n => 171.48.8.115\n)\n\n => Array\n(\n => 157.44.152.14\n)\n\n => Array\n(\n => 103.77.43.219\n)\n\n => Array\n(\n => 122.161.49.38\n)\n\n => Array\n(\n => 122.161.52.83\n)\n\n => Array\n(\n => 122.173.108.210\n)\n\n => Array\n(\n => 60.254.109.92\n)\n\n => Array\n(\n => 103.57.85.75\n)\n\n => Array\n(\n => 106.0.58.36\n)\n\n => Array\n(\n => 122.161.49.212\n)\n\n => Array\n(\n => 27.255.182.159\n)\n\n => Array\n(\n => 116.75.230.159\n)\n\n => Array\n(\n => 122.173.152.133\n)\n\n => Array\n(\n => 129.0.79.247\n)\n\n => Array\n(\n => 223.228.163.44\n)\n\n => Array\n(\n => 103.168.78.82\n)\n\n => Array\n(\n => 39.59.67.124\n)\n\n => Array\n(\n => 182.69.19.120\n)\n\n => Array\n(\n => 196.202.236.195\n)\n\n => Array\n(\n => 137.59.225.206\n)\n\n => Array\n(\n => 143.110.209.194\n)\n\n => Array\n(\n => 117.201.233.91\n)\n\n => Array\n(\n => 37.120.150.107\n)\n\n => Array\n(\n => 58.65.222.10\n)\n\n => Array\n(\n => 202.47.43.86\n)\n\n => Array\n(\n => 106.206.223.234\n)\n\n => Array\n(\n => 5.195.153.158\n)\n\n => Array\n(\n => 223.227.127.243\n)\n\n => Array\n(\n => 103.165.12.222\n)\n\n => Array\n(\n => 49.36.185.189\n)\n\n => Array\n(\n => 59.96.92.57\n)\n\n => Array\n(\n => 203.194.104.235\n)\n\n => Array\n(\n => 122.177.72.33\n)\n\n => Array\n(\n => 106.213.126.40\n)\n\n => Array\n(\n => 45.127.232.69\n)\n\n => Array\n(\n => 156.146.59.39\n)\n\n => Array\n(\n => 103.21.184.11\n)\n\n => Array\n(\n => 106.212.47.59\n)\n\n => Array\n(\n => 182.179.137.235\n)\n\n => Array\n(\n => 49.36.178.154\n)\n\n => Array\n(\n => 171.48.7.128\n)\n\n => Array\n(\n => 119.160.57.96\n)\n\n => Array\n(\n => 197.210.79.92\n)\n\n => Array\n(\n => 36.255.45.87\n)\n\n => Array\n(\n => 47.31.219.47\n)\n\n => Array\n(\n => 122.161.51.160\n)\n\n => Array\n(\n => 103.217.123.129\n)\n\n => Array\n(\n => 59.153.16.12\n)\n\n => Array\n(\n => 103.92.43.226\n)\n\n => Array\n(\n => 47.31.139.139\n)\n\n => Array\n(\n => 210.2.140.18\n)\n\n => Array\n(\n => 106.210.33.219\n)\n\n => Array\n(\n => 175.107.203.34\n)\n\n => Array\n(\n => 146.196.32.144\n)\n\n => Array\n(\n => 103.12.133.121\n)\n\n => Array\n(\n => 103.59.208.182\n)\n\n => Array\n(\n => 157.37.190.232\n)\n\n => Array\n(\n => 106.195.35.201\n)\n\n => Array\n(\n => 27.122.14.83\n)\n\n => Array\n(\n => 194.193.44.5\n)\n\n => Array\n(\n => 5.62.43.245\n)\n\n => Array\n(\n => 103.53.80.50\n)\n\n => Array\n(\n => 47.29.142.233\n)\n\n => Array\n(\n => 154.6.20.63\n)\n\n => Array\n(\n => 173.245.203.128\n)\n\n => Array\n(\n => 103.77.43.231\n)\n\n => Array\n(\n => 5.107.166.235\n)\n\n => Array\n(\n => 106.212.44.123\n)\n\n => Array\n(\n => 157.41.60.93\n)\n\n => Array\n(\n => 27.58.179.79\n)\n\n => Array\n(\n => 157.37.167.144\n)\n\n => Array\n(\n => 119.160.57.115\n)\n\n => Array\n(\n => 122.161.53.224\n)\n\n => Array\n(\n => 49.36.233.51\n)\n\n => Array\n(\n => 101.0.32.8\n)\n\n => Array\n(\n => 119.160.103.158\n)\n\n => Array\n(\n => 122.177.79.115\n)\n\n => Array\n(\n => 107.181.166.27\n)\n\n => Array\n(\n => 183.6.0.125\n)\n\n => Array\n(\n => 49.36.186.0\n)\n\n => Array\n(\n => 202.181.5.4\n)\n\n => Array\n(\n => 45.118.165.144\n)\n\n => Array\n(\n => 171.96.157.133\n)\n\n => Array\n(\n => 222.252.51.163\n)\n\n => Array\n(\n => 103.81.215.162\n)\n\n => Array\n(\n => 110.225.93.208\n)\n\n => Array\n(\n => 122.161.48.200\n)\n\n => Array\n(\n => 119.63.138.173\n)\n\n => Array\n(\n => 202.83.58.208\n)\n\n => Array\n(\n => 122.161.53.101\n)\n\n => Array\n(\n => 137.97.95.21\n)\n\n => Array\n(\n => 112.204.167.123\n)\n\n => Array\n(\n => 122.180.21.151\n)\n\n => Array\n(\n => 103.120.44.108\n)\n\n => Array\n(\n => 49.37.220.174\n)\n\n => Array\n(\n => 1.55.255.124\n)\n\n => Array\n(\n => 23.227.140.173\n)\n\n => Array\n(\n => 43.248.153.110\n)\n\n => Array\n(\n => 106.214.93.101\n)\n\n => Array\n(\n => 103.83.149.36\n)\n\n => Array\n(\n => 103.217.123.57\n)\n\n => Array\n(\n => 193.9.113.119\n)\n\n => Array\n(\n => 14.182.57.204\n)\n\n => Array\n(\n => 117.201.231.0\n)\n\n => Array\n(\n => 14.99.198.186\n)\n\n => Array\n(\n => 36.255.44.204\n)\n\n => Array\n(\n => 103.160.236.42\n)\n\n => Array\n(\n => 31.202.16.116\n)\n\n => Array\n(\n => 223.239.49.201\n)\n\n => Array\n(\n => 122.161.102.149\n)\n\n => Array\n(\n => 117.196.123.184\n)\n\n => Array\n(\n => 49.205.112.105\n)\n\n => Array\n(\n => 103.244.176.201\n)\n\n => Array\n(\n => 95.216.15.219\n)\n\n => Array\n(\n => 103.107.196.174\n)\n\n => Array\n(\n => 203.190.34.65\n)\n\n => Array\n(\n => 23.227.140.182\n)\n\n => Array\n(\n => 171.79.74.74\n)\n\n => Array\n(\n => 106.206.223.244\n)\n\n => Array\n(\n => 180.151.28.140\n)\n\n => Array\n(\n => 165.225.124.114\n)\n\n => Array\n(\n => 106.206.223.252\n)\n\n => Array\n(\n => 39.62.23.38\n)\n\n => Array\n(\n => 112.211.252.33\n)\n\n => Array\n(\n => 146.70.66.242\n)\n\n => Array\n(\n => 222.252.51.38\n)\n\n => Array\n(\n => 122.162.151.223\n)\n\n => Array\n(\n => 180.178.154.100\n)\n\n => Array\n(\n => 180.94.33.94\n)\n\n => Array\n(\n => 205.164.130.82\n)\n\n => Array\n(\n => 117.196.114.167\n)\n\n => Array\n(\n => 43.224.0.189\n)\n\n => Array\n(\n => 154.6.20.59\n)\n\n => Array\n(\n => 122.161.131.67\n)\n\n => Array\n(\n => 70.68.68.159\n)\n\n => Array\n(\n => 103.125.130.200\n)\n\n => Array\n(\n => 43.242.176.147\n)\n\n => Array\n(\n => 129.0.102.29\n)\n\n => Array\n(\n => 182.64.180.32\n)\n\n => Array\n(\n => 110.93.250.196\n)\n\n => Array\n(\n => 139.135.57.197\n)\n\n => Array\n(\n => 157.33.219.2\n)\n\n => Array\n(\n => 205.253.123.239\n)\n\n => Array\n(\n => 122.177.66.119\n)\n\n => Array\n(\n => 182.64.105.252\n)\n\n => Array\n(\n => 14.97.111.154\n)\n\n => Array\n(\n => 146.196.35.35\n)\n\n => Array\n(\n => 103.167.162.205\n)\n\n => Array\n(\n => 37.111.130.245\n)\n\n => Array\n(\n => 49.228.51.196\n)\n\n => Array\n(\n => 157.39.148.205\n)\n\n => Array\n(\n => 129.0.102.28\n)\n\n => Array\n(\n => 103.82.191.229\n)\n\n => Array\n(\n => 194.104.23.140\n)\n\n => Array\n(\n => 49.205.193.252\n)\n\n => Array\n(\n => 222.252.33.119\n)\n\n => Array\n(\n => 173.255.132.114\n)\n\n => Array\n(\n => 182.64.148.162\n)\n\n => Array\n(\n => 175.176.87.8\n)\n\n => Array\n(\n => 5.62.57.6\n)\n\n => Array\n(\n => 119.160.96.229\n)\n\n => Array\n(\n => 49.205.180.226\n)\n\n => Array\n(\n => 95.142.120.59\n)\n\n => Array\n(\n => 183.82.116.204\n)\n\n => Array\n(\n => 202.89.69.186\n)\n\n => Array\n(\n => 39.48.165.36\n)\n\n => Array\n(\n => 192.140.149.81\n)\n\n => Array\n(\n => 198.16.70.28\n)\n\n => Array\n(\n => 103.25.250.236\n)\n\n => Array\n(\n => 106.76.202.244\n)\n\n => Array\n(\n => 47.8.8.165\n)\n\n => Array\n(\n => 202.5.145.213\n)\n\n => Array\n(\n => 106.212.188.243\n)\n\n => Array\n(\n => 106.215.89.2\n)\n\n => Array\n(\n => 119.82.83.148\n)\n\n => Array\n(\n => 123.24.164.245\n)\n\n => Array\n(\n => 187.67.51.106\n)\n\n => Array\n(\n => 117.196.119.95\n)\n\n => Array\n(\n => 95.142.120.66\n)\n\n => Array\n(\n => 156.146.59.35\n)\n\n => Array\n(\n => 49.205.213.148\n)\n\n => Array\n(\n => 111.223.27.206\n)\n\n => Array\n(\n => 49.205.212.86\n)\n\n => Array\n(\n => 103.77.42.103\n)\n\n => Array\n(\n => 110.227.62.25\n)\n\n => Array\n(\n => 122.179.54.140\n)\n\n => Array\n(\n => 157.39.239.81\n)\n\n => Array\n(\n => 138.128.27.234\n)\n\n => Array\n(\n => 103.244.176.194\n)\n\n => Array\n(\n => 130.105.10.127\n)\n\n => Array\n(\n => 103.116.250.191\n)\n\n => Array\n(\n => 122.180.186.6\n)\n\n => Array\n(\n => 101.53.228.52\n)\n\n => Array\n(\n => 39.57.138.90\n)\n\n => Array\n(\n => 197.156.137.165\n)\n\n => Array\n(\n => 49.37.155.78\n)\n\n => Array\n(\n => 39.59.81.32\n)\n\n => Array\n(\n => 45.127.44.78\n)\n\n => Array\n(\n => 103.58.155.83\n)\n\n => Array\n(\n => 175.107.220.20\n)\n\n => Array\n(\n => 14.255.9.197\n)\n\n => Array\n(\n => 103.55.63.146\n)\n\n => Array\n(\n => 49.205.138.81\n)\n\n => Array\n(\n => 45.35.222.243\n)\n\n => Array\n(\n => 203.190.34.57\n)\n\n => Array\n(\n => 205.253.121.11\n)\n\n => Array\n(\n => 154.72.171.177\n)\n\n => Array\n(\n => 39.52.203.37\n)\n\n => Array\n(\n => 122.161.52.2\n)\n\n => Array\n(\n => 82.145.41.170\n)\n\n => Array\n(\n => 103.217.123.33\n)\n\n => Array\n(\n => 103.150.238.100\n)\n\n => Array\n(\n => 125.99.11.182\n)\n\n => Array\n(\n => 103.217.178.70\n)\n\n => Array\n(\n => 197.210.227.95\n)\n\n => Array\n(\n => 116.75.212.153\n)\n\n => Array\n(\n => 212.102.42.202\n)\n\n => Array\n(\n => 49.34.177.147\n)\n\n => Array\n(\n => 173.242.123.110\n)\n\n => Array\n(\n => 49.36.35.254\n)\n\n => Array\n(\n => 202.47.59.82\n)\n\n => Array\n(\n => 157.42.197.119\n)\n\n => Array\n(\n => 103.99.196.250\n)\n\n => Array\n(\n => 119.155.228.244\n)\n\n => Array\n(\n => 130.105.160.170\n)\n\n => Array\n(\n => 78.132.235.189\n)\n\n => Array\n(\n => 202.142.186.114\n)\n\n => Array\n(\n => 115.99.156.136\n)\n\n => Array\n(\n => 14.162.166.254\n)\n\n => Array\n(\n => 157.39.133.205\n)\n\n => Array\n(\n => 103.196.139.157\n)\n\n => Array\n(\n => 139.99.159.20\n)\n\n => Array\n(\n => 175.176.87.42\n)\n\n => Array\n(\n => 103.46.202.244\n)\n\n => Array\n(\n => 175.176.87.16\n)\n\n => Array\n(\n => 49.156.85.55\n)\n\n => Array\n(\n => 157.39.101.65\n)\n\n => Array\n(\n => 124.253.195.93\n)\n\n => Array\n(\n => 110.227.59.8\n)\n\n => Array\n(\n => 157.50.50.6\n)\n\n => Array\n(\n => 95.142.120.25\n)\n\n => Array\n(\n => 49.36.186.141\n)\n\n => Array\n(\n => 110.227.54.161\n)\n\n => Array\n(\n => 88.117.62.180\n)\n\n => Array\n(\n => 110.227.57.8\n)\n\n => Array\n(\n => 106.200.36.21\n)\n\n => Array\n(\n => 202.131.143.247\n)\n\n => Array\n(\n => 103.46.202.4\n)\n\n => Array\n(\n => 122.177.78.217\n)\n\n => Array\n(\n => 124.253.195.201\n)\n\n => Array\n(\n => 27.58.17.91\n)\n\n => Array\n(\n => 223.228.143.162\n)\n\n => Array\n(\n => 119.160.96.233\n)\n\n => Array\n(\n => 49.156.69.213\n)\n\n => Array\n(\n => 41.80.97.54\n)\n\n => Array\n(\n => 122.176.207.193\n)\n\n => Array\n(\n => 45.118.156.6\n)\n\n => Array\n(\n => 157.39.154.210\n)\n\n => Array\n(\n => 103.48.197.173\n)\n\n => Array\n(\n => 103.46.202.98\n)\n\n => Array\n(\n => 157.43.214.102\n)\n\n => Array\n(\n => 180.191.125.73\n)\n\n => Array\n(\n => 157.39.57.227\n)\n\n => Array\n(\n => 119.152.107.157\n)\n\n => Array\n(\n => 103.166.58.254\n)\n\n => Array\n(\n => 139.135.54.30\n)\n\n => Array\n(\n => 185.199.102.101\n)\n\n => Array\n(\n => 59.103.204.4\n)\n\n => Array\n(\n => 202.14.123.34\n)\n\n => Array\n(\n => 103.48.197.243\n)\n\n => Array\n(\n => 39.34.158.129\n)\n\n => Array\n(\n => 156.146.59.38\n)\n\n => Array\n(\n => 110.227.51.220\n)\n\n => Array\n(\n => 157.45.123.212\n)\n\n => Array\n(\n => 110.227.52.20\n)\n\n => Array\n(\n => 43.242.176.136\n)\n\n => Array\n(\n => 223.233.71.146\n)\n\n => Array\n(\n => 216.73.160.38\n)\n\n => Array\n(\n => 103.217.178.136\n)\n\n => Array\n(\n => 116.71.2.6\n)\n\n => Array\n(\n => 182.180.126.109\n)\n\n => Array\n(\n => 39.53.148.157\n)\n\n => Array\n(\n => 42.108.5.233\n)\n\n => Array\n(\n => 106.66.3.13\n)\n\n => Array\n(\n => 122.177.70.225\n)\n\n => Array\n(\n => 124.253.153.145\n)\n\n => Array\n(\n => 27.58.82.70\n)\n\n => Array\n(\n => 180.149.241.195\n)\n\n => Array\n(\n => 106.201.27.247\n)\n\n => Array\n(\n => 110.227.55.161\n)\n\n => Array\n(\n => 157.39.27.241\n)\n\n => Array\n(\n => 119.157.73.172\n)\n\n => Array\n(\n => 139.167.175.124\n)\n\n => Array\n(\n => 103.224.35.251\n)\n\n => Array\n(\n => 103.16.30.236\n)\n\n => Array\n(\n => 110.227.62.54\n)\n\n => Array\n(\n => 39.34.159.83\n)\n\n => Array\n(\n => 223.233.72.228\n)\n\n => Array\n(\n => 110.227.61.118\n)\n\n => Array\n(\n => 202.142.146.254\n)\n\n => Array\n(\n => 39.57.211.201\n)\n\n => Array\n(\n => 103.150.238.78\n)\n\n => Array\n(\n => 157.38.90.66\n)\n\n => Array\n(\n => 205.253.127.236\n)\n\n => Array\n(\n => 157.47.11.111\n)\n\n => Array\n(\n => 122.161.49.87\n)\n\n => Array\n(\n => 110.227.53.55\n)\n\n => Array\n(\n => 125.209.112.14\n)\n\n => Array\n(\n => 146.196.34.86\n)\n\n => Array\n(\n => 212.102.60.173\n)\n\n => Array\n(\n => 103.48.197.213\n)\n\n => Array\n(\n => 37.111.248.140\n)\n\n => Array\n(\n => 125.99.220.195\n)\n\n => Array\n(\n => 202.47.40.5\n)\n\n => Array\n(\n => 49.206.117.79\n)\n\n => Array\n(\n => 103.16.30.237\n)\n\n => Array\n(\n => 122.161.237.115\n)\n\n => Array\n(\n => 110.227.55.14\n)\n\n => Array\n(\n => 157.39.61.222\n)\n\n => Array\n(\n => 49.15.201.99\n)\n\n => Array\n(\n => 103.48.197.125\n)\n\n => Array\n(\n => 49.36.209.138\n)\n\n => Array\n(\n => 184.75.211.148\n)\n\n => Array\n(\n => 122.176.203.217\n)\n\n => Array\n(\n => 183.83.217.201\n)\n\n => Array\n(\n => 110.227.55.131\n)\n\n => Array\n(\n => 103.39.116.83\n)\n\n => Array\n(\n => 72.255.36.239\n)\n\n => Array\n(\n => 103.18.170.97\n)\n\n => Array\n(\n => 113.191.28.233\n)\n\n => Array\n(\n => 157.45.60.175\n)\n\n => Array\n(\n => 49.37.215.143\n)\n\n => Array\n(\n => 103.48.197.184\n)\n\n => Array\n(\n => 103.48.67.155\n)\n\n => Array\n(\n => 103.244.178.248\n)\n\n => Array\n(\n => 81.92.203.92\n)\n\n => Array\n(\n => 194.71.227.72\n)\n\n => Array\n(\n => 39.57.225.212\n)\n\n => Array\n(\n => 223.233.78.41\n)\n\n => Array\n(\n => 210.89.58.87\n)\n\n => Array\n(\n => 39.57.253.146\n)\n\n => Array\n(\n => 132.154.171.12\n)\n\n => Array\n(\n => 39.59.5.11\n)\n\n => Array\n(\n => 49.156.111.206\n)\n\n => Array\n(\n => 223.233.70.184\n)\n\n => Array\n(\n => 103.99.12.161\n)\n\n => Array\n(\n => 122.161.75.127\n)\n\n => Array\n(\n => 154.6.17.167\n)\n\n => Array\n(\n => 122.180.182.155\n)\n\n => Array\n(\n => 103.150.27.242\n)\n\n => Array\n(\n => 111.88.100.37\n)\n\n => Array\n(\n => 103.46.202.226\n)\n\n => Array\n(\n => 49.37.153.19\n)\n\n => Array\n(\n => 59.103.106.175\n)\n\n => Array\n(\n => 223.233.69.225\n)\n\n => Array\n(\n => 47.15.16.32\n)\n\n => Array\n(\n => 68.235.33.144\n)\n\n => Array\n(\n => 175.107.235.13\n)\n\n => Array\n(\n => 103.102.123.3\n)\n\n => Array\n(\n => 124.253.42.245\n)\n\n => Array\n(\n => 122.176.206.47\n)\n\n => Array\n(\n => 103.99.12.174\n)\n\n => Array\n(\n => 157.47.87.185\n)\n\n => Array\n(\n => 185.212.168.140\n)\n\n => Array\n(\n => 128.199.22.216\n)\n\n => Array\n(\n => 89.232.35.12\n)\n\n => Array\n(\n => 27.147.205.190\n)\n\n => Array\n(\n => 14.177.165.23\n)\n\n => Array\n(\n => 103.99.12.171\n)\n\n => Array\n(\n => 103.212.158.118\n)\n\n => Array\n(\n => 157.39.57.48\n)\n\n => Array\n(\n => 27.58.8.185\n)\n\n => Array\n(\n => 27.255.165.154\n)\n\n => Array\n(\n => 49.207.195.198\n)\n\n => Array\n(\n => 202.143.116.253\n)\n\n => Array\n(\n => 157.39.43.206\n)\n\n => Array\n(\n => 124.253.226.92\n)\n\n => Array\n(\n => 157.39.31.58\n)\n\n => Array\n(\n => 122.161.50.210\n)\n\n => Array\n(\n => 195.206.169.201\n)\n\n => Array\n(\n => 223.228.182.16\n)\n\n => Array\n(\n => 45.115.104.215\n)\n\n => Array\n(\n => 72.255.43.59\n)\n\n => Array\n(\n => 103.31.100.248\n)\n\n => Array\n(\n => 124.253.203.239\n)\n\n => Array\n(\n => 47.9.89.148\n)\n\n => Array\n(\n => 112.196.174.206\n)\n\n => Array\n(\n => 183.82.0.22\n)\n\n => Array\n(\n => 103.198.97.150\n)\n\n => Array\n(\n => 103.166.150.48\n)\n\n => Array\n(\n => 43.242.178.203\n)\n\n => Array\n(\n => 103.244.179.176\n)\n\n => Array\n(\n => 122.177.76.132\n)\n\n => Array\n(\n => 223.178.211.71\n)\n\n => Array\n(\n => 39.57.147.246\n)\n\n => Array\n(\n => 117.99.196.22\n)\n\n => Array\n(\n => 203.212.30.241\n)\n\n => Array\n(\n => 180.195.208.38\n)\n\n => Array\n(\n => 122.176.196.31\n)\n\n => Array\n(\n => 157.34.63.20\n)\n\n => Array\n(\n => 157.37.181.194\n)\n\n => Array\n(\n => 1.37.81.200\n)\n\n => Array\n(\n => 115.97.194.237\n)\n\n => Array\n(\n => 146.70.46.30\n)\n\n => Array\n(\n => 42.113.204.159\n)\n\n => Array\n(\n => 59.88.116.16\n)\n\n => Array\n(\n => 103.31.94.20\n)\n\n => Array\n(\n => 42.113.157.49\n)\n\n => Array\n(\n => 115.97.141.167\n)\n\n => Array\n(\n => 154.72.169.33\n)\n\n => Array\n(\n => 103.99.12.164\n)\n\n => Array\n(\n => 120.29.98.198\n)\n\n => Array\n(\n => 37.120.197.37\n)\n\n => Array\n(\n => 198.16.66.157\n)\n\n => Array\n(\n => 27.6.200.177\n)\n\n => Array\n(\n => 182.190.206.136\n)\n\n => Array\n(\n => 120.29.98.174\n)\n\n => Array\n(\n => 193.34.232.24\n)\n\n => Array\n(\n => 27.6.201.83\n)\n\n => Array\n(\n => 202.47.45.186\n)\n\n => Array\n(\n => 39.42.36.181\n)\n\n => Array\n(\n => 103.49.200.43\n)\n\n => Array\n(\n => 37.19.218.8\n)\n\n => Array\n(\n => 49.37.191.10\n)\n\n => Array\n(\n => 212.102.46.68\n)\n\n => Array\n(\n => 39.59.62.91\n)\n\n => Array\n(\n => 43.239.205.65\n)\n\n => Array\n(\n => 122.2.72.187\n)\n\n => Array\n(\n => 193.19.109.23\n)\n\n => Array\n(\n => 136.185.37.86\n)\n\n => Array\n(\n => 103.246.41.8\n)\n\n => Array\n(\n => 103.174.242.77\n)\n\n => Array\n(\n => 39.59.4.29\n)\n\n => Array\n(\n => 49.36.222.31\n)\n\n => Array\n(\n => 49.36.178.90\n)\n\n => Array\n(\n => 180.190.65.169\n)\n\n => Array\n(\n => 27.6.205.158\n)\n\n => Array\n(\n => 122.161.48.3\n)\n\n => Array\n(\n => 183.83.212.172\n)\n\n => Array\n(\n => 157.36.118.177\n)\n\n => Array\n(\n => 106.213.54.136\n)\n\n => Array\n(\n => 27.6.206.142\n)\n\n => Array\n(\n => 116.107.29.76\n)\n\n => Array\n(\n => 59.103.191.90\n)\n\n => Array\n(\n => 59.94.175.89\n)\n\n => Array\n(\n => 103.59.75.148\n)\n\n)\nArchive for October, 2017: Reaching For Financial Security\n Layout: Blue and Brown (Default) Author's Creation\n Home > Archive: October, 2017"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.989286,"math_prob":0.9994973,"size":823,"snap":"2022-05-2022-21","text_gpt3_token_len":200,"char_repetition_ratio":0.0952381,"word_repetition_ratio":0.0,"special_character_ratio":0.23572296,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997806,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-18T09:47:35Z\",\"WARC-Record-ID\":\"<urn:uuid:e1c68722-c3af-47be-b6c8-c4197f7f1a08>\",\"Content-Length\":\"345645\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ee7e5a1-dabe-4010-a994-7bc001f3e847>\",\"WARC-Concurrent-To\":\"<urn:uuid:eefeaf84-cf0a-4cae-bb91-5b2f5094fe07>\",\"WARC-IP-Address\":\"173.231.200.26\",\"WARC-Target-URI\":\"https://petunia100.savingadvice.com/2017/10/\",\"WARC-Payload-Digest\":\"sha1:S4FGO7KSQI5U4JGU4GQNK2TCRY7D5FCR\",\"WARC-Block-Digest\":\"sha1:NFFKY3CQKPMPXT5B4IXYGSWBFPT7UYA4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300810.66_warc_CC-MAIN-20220118092443-20220118122443-00594.warc.gz\"}"} |
https://new.hindawi.com/journals/jam/2013/873670/ | [
"/ / Article\nSpecial Issue\n\n## Intelligent Modeling and Verification\n\nView this Special Issue\n\nResearch Article | Open Access\n\nVolume 2013 |Article ID 873670 | 10 pages | https://doi.org/10.1155/2013/873670\n\n# Chaotic Hopfield Neural Network Swarm Optimization and Its Application\n\nAccepted20 Mar 2013\nPublished15 Apr 2013\n\n#### Abstract\n\nA new neural network based optimization algorithm is proposed. The presented model is a discrete-time, continuous-state Hopfield neural network and the states of the model are updated synchronously. The proposed algorithm combines the advantages of traditional PSO, chaos and Hopfield neural networks: particles learn from their own experience and the experiences of surrounding particles, their search behavior is ergodic, and convergence of the swarm is guaranteed. The effectiveness of the proposed approach is demonstrated using simulations and typical optimization problems.\n\n#### 1. Introduction\n\nThe discovery of chaos in astronomical, solar, fluid, and other systems sparked significant research in nonlinear dynamics exhibiting chaos. Chaos was found to be useful and have great potential in many disciplines such as mixing liquids with low power consumption, presenting outages in power systems, biomedical engineering applications involving signals from the brain and heart, to name just a few . Chaotic systems exhibit three important properties. Firstly, a deterministic system is said to be chaotic whenever its evolution sensitively depends on the initial conditions. Secondly, there is an infinite number of unstable periodic orbits embedded in the underlying chaotic set. Thirdly, the dynamics of the chaotic attractor is ergodic, which implies that during its temporal evolution the system ergodically visits small neighborhoods around every point in each one of the unstable periodic orbits embedded within the chaotic attractor. Although it appears to be stochastic, it is generated by a deterministic nonlinear system. Lyapunov exponents characterize quantitatively stochastic properties of the dynamical systems. When the dynamical system is chaotic, there exists at least one lyapunov exponent . It is reported that chaotic behavior also exists in biological neurons and neural networks [2, 3]. Using chaos to develop novel optimization techniques gained much attention during the last decade. For a given energy or cost function, the chaotic ergodic orbits of a chaotic dynamic system used for optimization may eventually reach the global optimum or a point close to it with high probability [4, 5].\n\nSince Hopfield and Tank applied their neural network to the travelling salesman problem, neural networks have provided a powerful approach to a wide variety of optimization problems [7, 8]. However the Hopfield neural network (HNN) often gets trapped in a local minima. A number of modifications were made to Hopfield neural networks to escape from local minima. Some modifications, based on chaotic neural networks and simulated annealing , were proposed to solve global optimization problems . In the guaranteed convergence of Hopfield neural networks is discussed.\n\nParticle swarm optimization (PSO), developed by Clerc and Kennedy in 2002 , is a stochastic global optimization method which is based on simulation of social behavior. In a particle swarm optimizer, individuals “evolve\" by cooperating with other individuals over several generations. Each particle adjusts its flying according to its own flying experiences and the flying experience of its companions. Each individual is named as a particle which, in fact, represents a potential solution to a problem. Each particle is treated as a point in a -dimensional space. However, the PSO algorithm is likely to temporarily get stuck and may need a long period of time to escape from a local extremum . It is difficult to guarantee the convergence of the swarm, especially when random parameters are used. In order to improve the dynamical behavior of PSO, one can combine chaos with PSO algorithms to enhance the performance of PSO. In chaos were applied to the PSO to avoid the PSO getting trapped in local minima.\n\nPSO is motivated by the behavior of organisms such as fish schooling and bird flocking . During the process, future particle positions (determined by velocity) can be regarded as particle intelligence . Using a chaotic intelligent swarm system to replace the original PSO might be convenient for analysis while maintaining stochastic search properties. Most importantly, the convergence of a particle swarm initialized with random weights is not guaranteed.\n\nIn this paper we propose a chaotic Hopfield neural network swarm optimization (CHNNSO) algorithm. The rest of the paper is organized as follows. In Section 2, the preliminaries of Hopfield neural networks and PSO are described. The chaotic Hopfield neural network model is developed in Section 3. In Section 4, the dynamics of the chaotic Hopfield neural network is analyzed. Section 5 provides simulation results and comparisons. The conclusion is given in Section 6.\n\n#### 2. Preliminaries\n\n##### 2.1. Basic Hopfield Neural Network Theory \n\nA Hopfield net is a recurrent neural network having a synaptic connection pattern such that there is an underlying Lyapunov energy function for the activity dynamics. Started in any initial state, the state of the system evolves to a final state that is a (local) minimum of the Lyapunov energy function. The Lyapunov energy function decreases in a monotone fashion under the dynamics and is bounded below. Because of the existence of an elementary Lyapunov energy function for the dynamics, the only possible asymptotic result is a state on an attractor.\n\nThere are two popular forms of the model: binary neurons with discrete time which is updated one at a time and continuous time graded neurons. In this paper, the second kind of model is used. The dynamics of a -neuron continuous Hopfield neural network is described by Here, is the input of neuron , and the output of neuron is where , is a positive constant, is external inputs (e.g., sensory input or bias current) to neuron and is sometimes called the “firing threshold\" when replaced with . is the mean internal potential of the neuron which determines the output of neuron . is the strength of synaptic input from neuron to neuron . is a monotone function that converts internal potential into firing rate input of the neuron. is the matrix with elements . When is symmetric, the Lyapunov energy function is given by where is the inverse of the gain function . There is a significant limiting case of this function when has no diagonal elements and the input-output relation becomes a step, going from to a maximum firing rate (for convenience, scaled to ). The third term of this Lyapnouv function is then zero or infinite. With no diagonal elements in , the minima of are all located at corners of the hypercube . In this limit, the states of the continuous variable system are stable.\n\nMany optimization problems can be readily represented using Hopfield nets by transforming the problem into variables such that the desired optimization corresponds to the minimization of the respective Lyapunov energy function . The dynamics of the HNN converges to a local Lyapunov energy minimum. If this local minimum is also the global minimum, the solution of the desired optimization task has been carried out by the convergence of the network state.\n\n##### 2.2. Basic PSO Theory\n\nMany real optimization problems can be formulated as the following functional optimization problem: Here is the objective function, and is the decision vector consisting of variables.\n\nThe original particle swarm algorithm works by iteratively searching in a region and is concerned with the best previous success of each particle, the best previous success of the particle swarm and the current position and velocity of each particle . Every candidate solution of is called a “particle.” The particle searches the domain of the problem according to where is the velocity of particle ; represents the position of particle ; represents the best previous position of particle (indicating the best discoveries or previous experience of particle ); represents the best previous position among all particles (indicating the best discovery or previous experience of the social swarm); is the inertia weight that controls the impact of the previous velocity of the particle on its current velocity and is sometimes adaptive ; and are two random weights whose components and () are chosen uniformly within the interval which might not guarantee the convergence of the particle trajectory; and are the positive constant parameters. Generally the value of each component in should be clamped to the range to control excessive roaming of particles outside the search space.\n\n#### 3. A Chaotic Hopfield Neural Network Model\n\nFrom the introduction of basic PSO theory, every particle can be seen as the model of a single fish or a single bird. The position chosen by the particle can be regarded as a state of a neural network with a random synaptic connection. According to (5)-(6), the position components of particle can be thought of as the output of a neural network as shown in Figure 1.\n\nIn Figure 1, and are two independent and uniformly distributed random variables within the range , which refer to and , respectively. and are the components of and , respectively. is the previous best value amongst all particles, and , as an externally applied input, is the element of the best previous position , and it is coupled with other components of . The particles migrate toward a new position according to (5)-(6). This process is repeated until a defined stopping criterion is met (e.g., maximum number of iterations or a sufficiently good fitness value).\n\nAs pointed out by Clerc and Kennedy , the powerful optimization ability of the PSO comes from the interaction amongst the particles. The analysis of complex interaction amongst the particles in the swarm is beyond the scope of this paper which focuses on the construction of a simple particle using a neural network perspective and convergence issues. Artificial neural networks are composed of simple artificial neurons mimicking biological neurons. The HNN has the property that as each neuron in a HNN updates, an energy function is monotonically reduced until the network stabilizes . One can therefore map an optimization problem to a HNN such that the cost function of the problem corresponds to the energy function of the HNN and the result of the HNN thus suggests a low cost solution to the optimization problem. The HNN might therefore be a good choice to model particle behavior.\n\nIn order to approach and , the HNN model should include at least two neurons. For simplicity, the HNN model of each particle position component has two neurons whose outputs are and . In order to transform the problem into variables such that the desired optimization corresponds to the minimization of the energy function, the objective function should be determined firstly. As and should approach and , respectively, and can be chosen as two parts of the energy function. The third part of energy function is added to accompany to cause to tend towards . Therefore the HNN Lyapunov energy function for each particle is proposed: where , , and are positive constants.\n\nHere the neuron input-output function is chosen as a sigmoid function, given by (9) and (11). Equations (8) and (10) are the Euler approximation of (1) of the continuous Hopfield neural network . The dynamics of component of particle is described by According to (5)-(6) and Figure 1, the PSO uses random weights to simulate birds flocking or fish searching for food. When birds flock or fish search for food, they exhibit chaos like behavior, yet (8)–(11) do not generate chaos. Aihara et al. proposed a kind of chaotic neuron, which includes relative refractoriness in the model to simulate chaos in a biological brain. To use this result and are added to (8) and (10) to cause chaos. Equations (8) and (10) then become In order to escape from chaos as time evolves, we set In (8)–(14): , , and are positive parameters; is self-feedback connection weight (the refractory strength); is the damping factor of the time-dependent , ; is a positive parameter.\n\nAll the parameters are fixed except which is varied.\n\nThe combination of (9), (11)–(14) is called chaotic Hopfield neural network swarm optimization(CHNNSO) proposed by us. According to (8)–(14), the following procedure can be used for implementing the proposed CHNNSO algorithm.(1) Initialize the swarm, assign a random position in the problem hyperspace to each particle, and calculate the fitness function which is given by the optimization problem whose variables are corresponding to the elements of particle position coordinates.(2) Synchronously update the positions of all the particles using (9), (11)–(14) and change the two states every iteration.(3) Evaluate the fitness function for each particle. (4) For each individual particle, compare the particle’s fitness value with its previous best fitness value. If the current value is better than the previous best value, then set this value as the and the current particle’s position, , as , else if the is updated, then reset . (5) Identify the particle that has the best fitness value. When iterations are less than a certain value, and if the particle with the best fitness value is changed then reset to keep the particles chaotic to prevent premature convergence.(6) Repeat steps until a stopping criterion is met (e.g., maximum number of iterations or a sufficiently good fitness value).\n\nAs can be seen from (9) and (11), the particle position component is located in the interval . The optimization problem variable interval must therefore be mapped to and vice versa using Here, and are the lower boundary and the upper boundary of , respectively, and only one particle is analyzed for simplicity.\n\n#### 4. Dynamics of Chaotic Hopfield Network Swarm Optimization\n\nIn this section, the dynamics of the chaotic Hopfield network swarm optimization (CHNNSO) is analyzed. The first subsection discusses the convergence of the chaotic particle swarm. The second subsection discusses the dynamics of the simplest CHNNSO with different parameter values.\n\n##### 4.1. Convergence of the Particle Swarm\n\nTheorem 1 (Wang and Smith ). If one has a network of neurons with arbitrarily increasing I/O functions, there exists a sufficient stability condition for a synchronous TCNN (transiently chaotic neural network) equation (12), namely, Here ( denotes the derivative with respect to time of the neural I/O function for neuron , in this paper is the sigmoid function (9). is the minimum eigenvalue of the connected weight matrix of the dynamics of a n-neuron continuous Hopfield neural network).\n\nTheorem 2. A sufficient stability condition for the CHNNSO model is .\n\nProof. When , It then follows that the equilibria of (12) and (13) can be evaluated by According to (7), we get In this paper, It is clear that and the stability condition (16) is satisfied when . The above analysis verifies Theorem 2.\n\nTheorem 3. The particles converge to the sphere with center point and radius ( is the final convergence equilibria, if the optimization problem is in two-dimensional plane, the particles are finally in a circle).\n\nIt is easy to show that the particle model given by (7) and (8)–(14) has only one equilibrium as , that is, . Hence, as , belongs to the hypersphere whose origin is and the radius is . Solving (9), (11), (19), and (20) simultaneously, we get With (23) and (24) satisfied, there must exist the final convergence equilibria and . So the best place the particle swarm can find is and radius is .\n\nThe above analysis therefore verifies Theorem 3.\n\n##### 4.2. Dynamics of the Simplest Chaotic Hopfield Neural Network Swarm\n\nIn this section, the dynamics of the simplest particle swarm model is analyzed. Equations (7) and (8)–(13) are the dynamic model of a single particle with subscript ignored. According to (7) and Theorem 3, the parameters , , and control the final convergent radius. According to trial and error, the parameters , , and can be chosen in the range from to . According to (16) and (22), and . In the simulation, the results are better when is in the neighborhood of and is in the neighborhood of . The parameters and control the time of the chaotic period. If is too big and/or is too small, the system will quickly escape from chaos and performance will be poor. The parameter is standard in the literature on chaotic neural networks. The simulation showed that the model is not sensitive to the values of parameters and , for example, and are feasible.\n\nThen the values of the parameters in (7)–(14) are set to Figure 2 shows the time evolution of , and the Lyapunov exponent of . The Lyapunov exponent characterizes the rate of separation of infinitesimally close trajectories. A positive Lyapunov exponent is usually taken as an indication that the system is chaotic . Here, is defined as At about 200 steps, decays to a small value and departs from chaos which corresponds with the change of from positive to negative.\n\nAccording to Figure 2, the convergence process of a simple particle position follows the nonlinear bifurcation making the particle converge to a stable fixed point from a strange attractor. In the following section, it is shown that the fixed point is determined by the best previous position among all particles and the best position of the individual particle.\n\nRemark 4. The proposed CHNNSO model is a deterministic Chaos-Hopfield neural network swarm which is different from existing PSOs with stochastic parameters. Its search orbits exhibit an evolutionary process of inverse period bifurcation from chaos to periodic orbits then to sink. As chaos is ergodic and the particle is always in a chaotic state at the beginning (e.g., in Figure 2), the particle can escape when trapped in local extrema. This proposed CHNNSO model will therefore in general not suffer from being easily trapped in a the local optimum and will continue to search for a global optimum.\n\n#### 5. Numerical Simulation\n\nTo test the performance of the proposed algorithms, two famous benchmark optimization problems and an engineering optimization problem with linear and nonlinear constraints are used. The solutions to the two benchmark problems can be represented in the plane and therefore the convergence of the CHNNSO can be clearly observed. The results of the third optimization problem when compared with other algorithms are displayed in Table 1. We will compare the CHNNSO with the original PSO .\n\n##### 5.1. The Rastrigin Function\n\nTo demonstrate the efficiency of the proposed technique, the famous Rastrigin function is chosen as a test problem. This function with two variables is given by The global minimum is −2 and the minimum point is (0, 0). There are about 50 local minima arranged in a lattice configuration.\n\nThe proposed technique is applied with a population size of 20 and the maximum number of iterations is 20000. The chaotic particle swarm parameters are chosen as , , , , , , , and .\n\nThe position of every particle is initialized with a random value. The time evolution of the cost of the Rastrigin function is shown in Figure 3. The global minimum at −2 is obtained by the best particle with .\n\nFrom Figure 3, it can be seen that the proposed method gives good optimization results. Since there are two variables in the Rastrigin function, it is easy to show the final convergent particle states in the plane.\n\nIn Figure 4, the “”s are the best experiences of each particle. The “+”s are the final states of the particles. The global minimum is also included in the “”s and the “+”s. Most “”s and “+”s are overlapped at the global minimum . According to Theorem 3, the particles will finally converge to a circle finally. For this Rastrigin problem, the particles’ final states converge to the circle as shown in Figure 4, and hence the global convergence of the particles is guaranteed.\n\nFigure 5 displays the results when the original PSO was used to optimize the Rastrigin function. In the numerical simulation, the particle swarm population size is also and parameters and are set to and set to . is set equal to the dynamic range of each dimension. The “”s in Figure 5 are the final states of all the particles corresponding to the “+”s in Figure 4. It is easy to see that the final states of the particles are ruleless even though the global minimum of −2 is obtained by the best experience of the particle swarm, that is, as shown in Figure 5.\n\nBy comparing the results obtained by the proposed CHNNSO in Figure 4 with the results of the original PSO in Figure 5, it can be seen that the final states of the particles of the proposed CHNNSO are attracted to the best experience of all the particles and that convergence is superior. The final states of CHNNSO particles are guaranteed to converge which is not the case for original PSO implications.\n\nWhen their parameters and are both set to and to a value of , is set equal to the dynamic range on each dimension. The constriction factors , , are applied to improve the convergence of the particle over time by damping the oscillations once the particle is focused on the best point in an optimal region. The main disadvantage of this method is that the particles may follow wider cycles and may not converge when the individual best performance is far from the neighborhoods best performance (two different regions) .\n\n##### 5.2. The Schaffer’s F6 Function\n\nTo further investigate the performance of the CHNNSO, the Schaffer’s F6 function is chosen. This function has a single global optimum at and , and a large number of local optima. The global optimum is difficult to find because the value at the best local optimum differs with only about from the global minimum. The local optima crowd around the global optimum. The proposed technique is applied with a population size of 30, the iterations are 100000, and the parameters of CHNNSO are chosen as\n\nThe position of each particle is initialized with a random value. In Figure 6, the “”s are the best experiences of each particle. The global minimum at is obtained by the best particle with which is included in the “”s. The “+”s are the final states of the particles. According to Theorem 3 the particles’ final states converge in a circle as shown in Figure 6 which proves global convergence. From Figure 6 it is clearly seen that the particles’ final states are attracted to the neighborhoods of the best experiences of all the particles and the convergence is good.\n\nFigure 7 shows the final particle states when the original PSO was used to optimize the Schaffer’s F6 function. In this numerical simulation of the original PSO, the particle swarm population size is also and parameters and are both set to and set is set to a value of . is set equal to the dynamic range of each dimension. The “”s in Figure 7 are the final states of all the particles corresponding to the “+”s in Figure 6. It is easy to see that the final states of the particles are ruleless in Figure 7. The global minimum is obtained by the best particle with , . The best experience from the original PSO is not as good as the best of the proposed CHNNSO.\n\nComparing the results obtained from the proposed CHNNSO in Figure 6 and the original PSO in Figure 7, it is clearly seen that the particles’ final states of the proposed CHNNSO are finally attracted to the best experience of all the particles and the convergence is better than that of the original PSO. The CHNNSO can guarantee the convergence of the particle swarm, but the final states of the original PSO are ruleless.\n\n##### 5.3. The Hartmann Function\n\nThe Hartmann function when ; is given by with belonging to Table 1 shows the parameter values for the Hartmann function when .\n\nWhen , , .\n\nThe time evolution of the cost of the Hartmann function is 15000. In Figure 8, only subdimensions are pictured. In Figure 8, the “+”s are the final states of the particles and the “*”s denote the best experiences of all particles. From Figure 8, it can be easily seen that the final states of the particles converge to the circle. The center point is and the radius is 0.1942. The final particle states confirm Theorem 3, and the final convergency is guaranteed.\n\n##### 5.4. Design of a Pressure Vessel\n\nThe pressure vessel problem described in [26, 27] is an example which has linear and nonlinear constraints and has been solved by a variety of techniques. The objective of the problem is to minimize the total cost of the material needed for forming and welding a cylindrical vessel. There are four design variables: (, thickness of the shell), (, thickness of the head), (, inner radius), and (, length of the cylindrical section of the vessel). and are integer multiples of 0.0625 inch, which are the available thickness of rolled steel plates, and and are continuous. The problem can be specified as follows: The following range of the variables were used : de Freitas Vas and de Graça Pinto Fernandes proposed an algorithm to deal with the constrained optimization problems. Here this algorithm is combined with CHNNSO to search for the global optimum. The proposed technique is applied with a population size of 20 and the maximum number of iterations is 20000. Then the values of the parameters in (7)–(14) are set to From Table 2, the best solution obtained by the CHNNSO is better than the other two solutions previously reported.\n\n Design variables Best solutions found This paper Hu et al. Coello \n\n#### 6. Conclusion\n\nThis paper proposed a chaotic neural networks swarm optimization algorithm. It incorporates the particle swarm searching structure having global optimization capability into Hopfield neural networks which guarantee convergence. In addition, by adding chaos generator terms into Hopfield neural networks, the ergodic searching capability is greatly improved in the proposed algorithm. The decay factor introduced in the chaos terms ensures that the searching evolves to convergence to global optimum after globally chaotic optimization. The experiment results of three classic benchmark functions showed that the proposed algorithm can guarantee the convergence of the particle swarm searching and can escape from local extremum. Therefore, the proposed algorithm improves the practicality of particle swarm optimization. As this is a general particle model, some techniques such as the local best version algorithm proposed in can be used together with the new model. This will be explored in future work.\n\n#### Acknowledgments\n\nThis work was supported by China/South Africa Research Cooperation Programme (nos. 78673 and CS06-L02), South African National Research Foundation Incentive Grant (no. 81705), SDUST Research Fund (no. 2010KYTD101), and Key scientific support program of Qingdao City (no. 11-2-3-51-nsh).\n\n1. J. C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, New York, NY, USA, 2004. View at: Zentralblatt MATH | MathSciNet\n2. K. Aihara and G. Matsumoto, Chaos in Biological Systems, Plenum Press, New York, NY, USA, 1987.\n3. C. A. Skarda and W. J. Freeman, “How brains make chaos in order to make sense of the world,” Behavioral and Brain Sciences, vol. 10, pp. 161–165, 1987. View at: Google Scholar\n4. L. Wang, D. Z. Zheng, and Q. S. Lin, “Survey on chaotic optimization methods,” Computation Technology Automation, vol. 20, pp. 1–5, 2001. View at: Google Scholar\n5. B. Li and W. S. Jiang, “Optimizing complex functions by chaos search,” Cybernetics and Systems, vol. 29, no. 4, pp. 409–419, 1998. View at: Google Scholar\n6. J. J. Hopfield and D. W. Tank, “‘Neural’ computation of decisons in optimization problems,” Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985. View at: Google Scholar | MathSciNet\n7. Z. Wang, Y. Liu, K. Fraser, and X. Liu, “Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays,” Physics Letters A, vol. 354, no. 4, pp. 288–297, 2006. View at: Publisher Site | Google Scholar\n8. T. Tanaka and E. Hiura, “Computational abilities of a chaotic neural network,” Physics Letters A, vol. 315, no. 3-4, pp. 225–230, 2003.\n9. K. Aihara, T. Takabe, and M. Toyoda, “Chaotic neural networks,” Physics Letters A, vol. 144, no. 6-7, pp. 333–340, 1990. View at: Publisher Site | Google Scholar | MathSciNet\n10. S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.\n11. L. Chen and K. Aihara, “Chaotic simulated annealing by a neural network model with transient chaos,” Neural Networks, vol. 8, no. 6, pp. 915–930, 1995. View at: Publisher Site | Google Scholar\n12. L. Chen and K. Aihara, “Chaos and asymptotical stability in discrete-time neural networks,” Physica D, vol. 104, no. 3-4, pp. 286–325, 1997.\n13. L. Wang, “On competitive learning,” IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1214–1217, 1997. View at: Publisher Site | Google Scholar\n14. L. Wang and K. Smith, “On chaotic simulated annealing,” IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 716–718, 1998. View at: Google Scholar\n15. M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002. View at: Publisher Site | Google Scholar\n16. J. Ke, J. X. Qian, and Y. Z. Qiao, “A modified particle swarm optimization algorithm,,” Journal of Circuits and Systems, vol. 10, pp. 87–91, 2003. View at: Google Scholar\n17. B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, “Improved particle swarm optimization combined with chaos,” Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261–1271, 2005. View at: Publisher Site | Google Scholar\n18. T. Xiang, X. Liao, and K. W. Wong, “An improved particle swarm optimization algorithm combined with piecewise linear chaotic map,” Applied Mathematics and Computation, vol. 190, no. 2, pp. 1637–1645, 2007.\n19. C. Fan and G. Jiang, “A simple particle swarm optimization combined with chaotic search,” in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 593–598, Chongqing, China, June 2008. View at: Publisher Site | Google Scholar\n20. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995. View at: Google Scholar\n21. S. H. Chen, A. J. Jakeman, and J. P. Norton, “Artificial Intelligence techniques: an introduction to their use for modelling environmental systems,” Mathematics and Computers in Simulation, vol. 78, no. 2-3, pp. 379–400, 2008.\n22. J. J. Hopfield, “Hopfield network,” Scholarpedia, vol. 2, article 1977, 2007. View at: Google Scholar\n23. J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 2554–2558, 1984. View at: Google Scholar\n24. Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, “Particle swarm optimization: basic concepts, variants and applications in power systems,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 171–195, 2008. View at: Publisher Site | Google Scholar\n25. J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, “A study of control parameters affectiong online performance of genetic algorithms for function optimization,” in Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 51–60, 1989. View at: Google Scholar\n26. A. I. de Freitas Vaz and E. M. da Graça Pinto Fernandes, “Optimization of nonlinear constrained particle swarm,” Technological and Economic Development of Economy, vol. 12, no. 1, pp. 30–36, 2006. View at: Google Scholar\n27. X. Hu, R. C. Eberhart, and Y. Shi, “Engineering optimization with particle swarm,” in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 53–57, 2003. View at: Google Scholar\n28. C. A. C. Coello, “Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.\n29. J. Kennedy, “Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance,” Neural Networks, vol. 18, pp. 205–217, 1997. View at: Google Scholar",
null,
"Download other formatsMore",
null,
"",
null,
"Order printed copiesOrder",
null,
"",
null,
"Sign up for content alertsSign up\n\nYou are browsing a BETA version of Hindawi.com. Click here to switch back to the original design."
] | [
null,
"https://static-01.hindawi.com/next_assets/WWQLtdVHTkpIy3ivTphyo/_next/static/node_modules/hindawi-ui/src/icon/svg/downloadBlack.svg",
null,
"https://static-01.hindawi.com/next_assets/WWQLtdVHTkpIy3ivTphyo/_next/static/node_modules/hindawi-ui/src/icon/svg/chevronDownGreen.svg",
null,
"https://static-01.hindawi.com/next_assets/WWQLtdVHTkpIy3ivTphyo/_next/static/node_modules/hindawi-ui/src/icon/svg/orderPrintedCopies.svg",
null,
"https://static-01.hindawi.com/next_assets/WWQLtdVHTkpIy3ivTphyo/_next/static/node_modules/hindawi-ui/src/icon/svg/rightArrowGreen.svg",
null,
"https://static-01.hindawi.com/next_assets/WWQLtdVHTkpIy3ivTphyo/_next/static/node_modules/hindawi-ui/src/icon/svg/flag.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88618517,"math_prob":0.9210675,"size":33151,"snap":"2019-51-2020-05","text_gpt3_token_len":7378,"char_repetition_ratio":0.16668175,"word_repetition_ratio":0.0945871,"special_character_ratio":0.22551356,"punctuation_ratio":0.14324497,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9763249,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T06:28:31Z\",\"WARC-Record-ID\":\"<urn:uuid:4fb75522-3fcb-4783-a6a8-b03a41fc8fcd>\",\"Content-Length\":\"1049438\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:32d026cc-a8e2-4d13-81cc-8f3f0d1c2b43>\",\"WARC-Concurrent-To\":\"<urn:uuid:fdf04d6a-b312-4a26-8136-17408126d34a>\",\"WARC-IP-Address\":\"54.192.30.109\",\"WARC-Target-URI\":\"https://new.hindawi.com/journals/jam/2013/873670/\",\"WARC-Payload-Digest\":\"sha1:FZCHCLF5PEA2PQSE3HSI62CEDUE5YOQC\",\"WARC-Block-Digest\":\"sha1:VLTKS7UJY54W2HHF7JBRBDRXNWARPUPL\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250592261.1_warc_CC-MAIN-20200118052321-20200118080321-00227.warc.gz\"}"} |
https://mobile.developer.com/net/csharp/article.php/1574731/Formatting-Negative-Numbers-Differently-Than-Positive-in-NET.htm | [
"",
null,
"# Formatting Negative Numbers Differently Than Positive in .NET\n\nFormat specifiers are most often used with numbers. What happens if you want to format a variable differently if the number is negative versus positive? Learn the specifiers needed to make this happen in .NET.\n\nIn my last article, I presented formatting specifiers for dates and times. Format specifiers are most often used with numbers. For example, to print a decimal number to three levels of precision, you would use D3 as the specifer. The following uses this specifier within WriteLine:\n\n`System.Console.WriteLine( \"A Number: {0:D3}\", dVar);`\n\nWhile numeric specifiers work with both positive and negative numbers, there are times when you want a negative number treated differently than a positive number.\n\nThe placeholder for specifying the format of a value can actually be separated into either two or three sections. If the placeholder is separated into two sections, the first is for positive numbers and zero and the second is for negative numbers. If it is broken into three sections, the first is for positive values, the middle is for negative values, and the third is for zero.\n\nThe placeholder is broken into these sections using a semicolon. The placeholder number is then included in each. For example, to format a number to print with three levels of precision when positive, five levels when negative, and no levels when zero, you do the following:\n\n`{0:D3;D5;'0'}`\n\nListing 1 presents this example in action along with a couple of additional examples.\n\nListing 1 — threeway.cs.\n\n` 1: // threeway.cs - Controlling the formatting of numbers 2: //---------------------------------------------------- 3: 4: using System; 5: 6: class myApp 7: { 8: public static void Main() 9: {10: Console.WriteLine(\"\\nExample 1...\");11: for ( int x = -100; x <= 100; x += 100 )12: {13: Console.WriteLine(\"{0:000;-00000;'0'}\", x); 14: }15: 16: Console.WriteLine(\"\\nExample 2...\");17: for ( int x = -100; x <= 100; x += 100 )18: {19: Console.WriteLine(\"{0:Pos: 0;Neg: -0;Zero}\", x); 20: }21: 22: Console.WriteLine(\"\\nExample 3...\");23: for ( int x = -100; x <= 100; x += 100 )24: {25: Console.WriteLine(\"{0:You Win!;You Lose!;You Broke Even!}\", x);26: }27: }28: }`\n\nThis listing produces the following output:\n\n`Example 1...-001000100Example 2...Neg: -100ZeroPos: 100Example 3...You Lose!You Broke Even!You Win! `\n\nThis listing helps illustrate how to break the custom formatting into three pieces. A for loop is used to create a negative number, increment the number to zero, and finally increment it to a positive number. The result is that the same WriteLine can be used to display all three values. This is done three separate times for three different examples.\n\nIn line 13, you see that the positive value will be printed to at least three digits because there are three zeros in the first formatting position. The negative number will include a negative sign followed by at least 5 numbers. You know this because the dash is included in the format for the negative sign, and there are five zeros. If the value is equal to zero, a zero will be printed.\n\nIn the second example, text is included with the formatting of the numbers. This is also done in the third example. The difference is that in the second example, zero placeholders are also included so the actual numbers will print. This is not the case with the third example where only text is displayed.\n\nAs you can see by all three of these examples, it is easy to cause different formats to be used based on the sign (positive or negative) of a variable."
] | [
null,
"https://www.qsstats.com/dcs38irdn10000g0vc4171yva_9y7z/njs.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83674604,"math_prob":0.97058445,"size":3449,"snap":"2021-04-2021-17","text_gpt3_token_len":826,"char_repetition_ratio":0.15065312,"word_repetition_ratio":0.04964539,"special_character_ratio":0.28732967,"punctuation_ratio":0.20596206,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96582955,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T13:49:30Z\",\"WARC-Record-ID\":\"<urn:uuid:fc797392-f15c-4054-847d-4b0c623fe273>\",\"Content-Length\":\"52191\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f07bcc3a-cae6-4187-8273-f8480e0f434b>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc0edf4f-f51e-4252-8f95-817f199273ac>\",\"WARC-IP-Address\":\"173.226.108.182\",\"WARC-Target-URI\":\"https://mobile.developer.com/net/csharp/article.php/1574731/Formatting-Negative-Numbers-Differently-Than-Positive-in-NET.htm\",\"WARC-Payload-Digest\":\"sha1:IWUN43CSXWYZHFWUBNEOWF7G2MCVLCC7\",\"WARC-Block-Digest\":\"sha1:QIF63URRZHMKGACWD7NX3FVQAOAWHZ4Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703538082.57_warc_CC-MAIN-20210123125715-20210123155715-00348.warc.gz\"}"} |
https://squ.pure.elsevier.com/en/publications/can-heterogeneity-of-the-near-wellbore-rock-cause-extrema-of-the- | [
"# Can heterogeneity of the near-wellbore rock cause extrema of the Darcian fluid inflow rate from the formation (the Polubarinova-Kochina problem revisited)?\n\nYurii Obnosov, Rouzalia Kasimova, Ali Al-Maktoumi, Anvar Kacimov\n\nResearch output: Contribution to journalArticle\n\n12 Citations (Scopus)\n\n### Abstract\n\nDarcian steady 2-D flow to a point sink (vertical well) placed eccentrically with respect to two circles demarcating zones of contrasting permeability is studied by the methods of complex analysis and numerically by MODFLOW package. In the analytical approach, two conjugated Laplace equations for a characteristic flow function are solved by the method of images, i.e. the original sink is mirrored about two circles that generates an infinite system of fictitious sinks and source. The internal circle of the annulus models formation damage (gravel pack) near the well and the ring-shaped zone represents a pristine porous medium. On the external circle the head (pressure) is fixed and on the internal circle streamlines are refracted. The latter is equivalent to continuity of pressure and normal component of specific discharge that is satisfied by the choice of the intensity and loci of fictitious sinks. Flow net and dependence of the well discharge on eccentricity are obtained for different annulus radii and permeability ratios. A non-trivial minimum of the discharge is discovered for the case of the ring domain permeability higher than that of the internal circle. In the numerical solution, a finite difference code is implemented and compared with the analytical results for the two-conductivity zone. Numerical solution is also obtained for an aquifer with a three-conductivity zonation. The case of permeability exponentially varying with one Cartesian coordinate within a circular feeding contour is studied analytically by series expansions of a characteristic function obeying a modified Helmholtz equation with a point singularity located eccentrically inside the feeding contour. The coefficients of the modified Bessel function series are obtained by the Sommerfeld addition theorem. A trivial minimum of the flow rate into a small-radius well signifies the trade-off between permeability variation and short-cutting between the well and feeding contour.\n\nOriginal language English 1252-1260 9 Computers and Geosciences 36 10 https://doi.org/10.1016/j.cageo.2010.01.014 Published - Oct 2010\n\n### Fingerprint\n\ninflow\nRocks\npermeability\nFluids\nfluid\nrock\nBessel functions\nHelmholtz equation\nLaplace equation\nGravel\nAquifers\nconductivity\nPorous materials\nFlow rate\neccentricity\nzonation\nporous medium\ngravel\nrate\n\n### Keywords\n\n• Analytical and FDM solution\n• Complex potential\n• Helmholtz equation\n• Laplace equation\n• Refraction\n• Specific discharge\n\n### ASJC Scopus subject areas\n\n• Information Systems\n• Computers in Earth Sciences\n\n### Cite this\n\nIn: Computers and Geosciences, Vol. 36, No. 10, 10.2010, p. 1252-1260.\n\nResearch output: Contribution to journalArticle\n\n@article{ea5834b0af03416e9faea0374af02b31,\ntitle = \"Can heterogeneity of the near-wellbore rock cause extrema of the Darcian fluid inflow rate from the formation (the Polubarinova-Kochina problem revisited)?\",\nabstract = \"Darcian steady 2-D flow to a point sink (vertical well) placed eccentrically with respect to two circles demarcating zones of contrasting permeability is studied by the methods of complex analysis and numerically by MODFLOW package. In the analytical approach, two conjugated Laplace equations for a characteristic flow function are solved by the method of images, i.e. the original sink is mirrored about two circles that generates an infinite system of fictitious sinks and source. The internal circle of the annulus models formation damage (gravel pack) near the well and the ring-shaped zone represents a pristine porous medium. On the external circle the head (pressure) is fixed and on the internal circle streamlines are refracted. The latter is equivalent to continuity of pressure and normal component of specific discharge that is satisfied by the choice of the intensity and loci of fictitious sinks. Flow net and dependence of the well discharge on eccentricity are obtained for different annulus radii and permeability ratios. A non-trivial minimum of the discharge is discovered for the case of the ring domain permeability higher than that of the internal circle. In the numerical solution, a finite difference code is implemented and compared with the analytical results for the two-conductivity zone. Numerical solution is also obtained for an aquifer with a three-conductivity zonation. The case of permeability exponentially varying with one Cartesian coordinate within a circular feeding contour is studied analytically by series expansions of a characteristic function obeying a modified Helmholtz equation with a point singularity located eccentrically inside the feeding contour. The coefficients of the modified Bessel function series are obtained by the Sommerfeld addition theorem. A trivial minimum of the flow rate into a small-radius well signifies the trade-off between permeability variation and short-cutting between the well and feeding contour.\",\nkeywords = \"Analytical and FDM solution, Complex potential, Helmholtz equation, Laplace equation, Refraction, Specific discharge\",\nauthor = \"Yurii Obnosov and Rouzalia Kasimova and Ali Al-Maktoumi and Anvar Kacimov\",\nyear = \"2010\",\nmonth = \"10\",\ndoi = \"10.1016/j.cageo.2010.01.014\",\nlanguage = \"English\",\nvolume = \"36\",\npages = \"1252--1260\",\njournal = \"Computers and Geosciences\",\nissn = \"0098-3004\",\npublisher = \"Elsevier Limited\",\nnumber = \"10\",\n\n}\n\nTY - JOUR\n\nT1 - Can heterogeneity of the near-wellbore rock cause extrema of the Darcian fluid inflow rate from the formation (the Polubarinova-Kochina problem revisited)?\n\nAU - Obnosov, Yurii\n\nAU - Kasimova, Rouzalia\n\nAU - Al-Maktoumi, Ali\n\nAU - Kacimov, Anvar\n\nPY - 2010/10\n\nY1 - 2010/10\n\nN2 - Darcian steady 2-D flow to a point sink (vertical well) placed eccentrically with respect to two circles demarcating zones of contrasting permeability is studied by the methods of complex analysis and numerically by MODFLOW package. In the analytical approach, two conjugated Laplace equations for a characteristic flow function are solved by the method of images, i.e. the original sink is mirrored about two circles that generates an infinite system of fictitious sinks and source. The internal circle of the annulus models formation damage (gravel pack) near the well and the ring-shaped zone represents a pristine porous medium. On the external circle the head (pressure) is fixed and on the internal circle streamlines are refracted. The latter is equivalent to continuity of pressure and normal component of specific discharge that is satisfied by the choice of the intensity and loci of fictitious sinks. Flow net and dependence of the well discharge on eccentricity are obtained for different annulus radii and permeability ratios. A non-trivial minimum of the discharge is discovered for the case of the ring domain permeability higher than that of the internal circle. In the numerical solution, a finite difference code is implemented and compared with the analytical results for the two-conductivity zone. Numerical solution is also obtained for an aquifer with a three-conductivity zonation. The case of permeability exponentially varying with one Cartesian coordinate within a circular feeding contour is studied analytically by series expansions of a characteristic function obeying a modified Helmholtz equation with a point singularity located eccentrically inside the feeding contour. The coefficients of the modified Bessel function series are obtained by the Sommerfeld addition theorem. A trivial minimum of the flow rate into a small-radius well signifies the trade-off between permeability variation and short-cutting between the well and feeding contour.\n\nAB - Darcian steady 2-D flow to a point sink (vertical well) placed eccentrically with respect to two circles demarcating zones of contrasting permeability is studied by the methods of complex analysis and numerically by MODFLOW package. In the analytical approach, two conjugated Laplace equations for a characteristic flow function are solved by the method of images, i.e. the original sink is mirrored about two circles that generates an infinite system of fictitious sinks and source. The internal circle of the annulus models formation damage (gravel pack) near the well and the ring-shaped zone represents a pristine porous medium. On the external circle the head (pressure) is fixed and on the internal circle streamlines are refracted. The latter is equivalent to continuity of pressure and normal component of specific discharge that is satisfied by the choice of the intensity and loci of fictitious sinks. Flow net and dependence of the well discharge on eccentricity are obtained for different annulus radii and permeability ratios. A non-trivial minimum of the discharge is discovered for the case of the ring domain permeability higher than that of the internal circle. In the numerical solution, a finite difference code is implemented and compared with the analytical results for the two-conductivity zone. Numerical solution is also obtained for an aquifer with a three-conductivity zonation. The case of permeability exponentially varying with one Cartesian coordinate within a circular feeding contour is studied analytically by series expansions of a characteristic function obeying a modified Helmholtz equation with a point singularity located eccentrically inside the feeding contour. The coefficients of the modified Bessel function series are obtained by the Sommerfeld addition theorem. A trivial minimum of the flow rate into a small-radius well signifies the trade-off between permeability variation and short-cutting between the well and feeding contour.\n\nKW - Analytical and FDM solution\n\nKW - Complex potential\n\nKW - Helmholtz equation\n\nKW - Laplace equation\n\nKW - Refraction\n\nKW - Specific discharge\n\nUR - http://www.scopus.com/inward/record.url?scp=77956738367&partnerID=8YFLogxK\n\nUR - http://www.scopus.com/inward/citedby.url?scp=77956738367&partnerID=8YFLogxK\n\nU2 - 10.1016/j.cageo.2010.01.014\n\nDO - 10.1016/j.cageo.2010.01.014\n\nM3 - Article\n\nAN - SCOPUS:77956738367\n\nVL - 36\n\nSP - 1252\n\nEP - 1260\n\nJO - Computers and Geosciences\n\nJF - Computers and Geosciences\n\nSN - 0098-3004\n\nIS - 10\n\nER -"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88455003,"math_prob":0.9265893,"size":7536,"snap":"2019-51-2020-05","text_gpt3_token_len":1660,"char_repetition_ratio":0.1125863,"word_repetition_ratio":0.8172043,"special_character_ratio":0.19758493,"punctuation_ratio":0.07766213,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9642706,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T06:40:18Z\",\"WARC-Record-ID\":\"<urn:uuid:3a5f59fd-82a8-49f3-8078-baddcaaaceb5>\",\"Content-Length\":\"49491\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e3e1e3a5-cf70-41c2-a7fc-c218d40c5ef5>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ce3b243-2560-40e0-83ed-8c440076b670>\",\"WARC-IP-Address\":\"52.51.22.49\",\"WARC-Target-URI\":\"https://squ.pure.elsevier.com/en/publications/can-heterogeneity-of-the-near-wellbore-rock-cause-extrema-of-the-\",\"WARC-Payload-Digest\":\"sha1:NMRUHZWJRUBFZAGCKY7SFLUFPQAMPKEY\",\"WARC-Block-Digest\":\"sha1:ASELS5E7TT4BW3BUVV3ASXXCGMPUINHW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250592261.1_warc_CC-MAIN-20200118052321-20200118080321-00123.warc.gz\"}"} |
https://play.google.com/store/apps/details?id=com.faadooengineers.free_mathsforengineers | [
"",
null,
"# Engineering mathematics",
null,
"Everyone\n815\nThe app is a complete free handbook of Engineering mathematics with diagrams and graphs. It is part of engineering education which brings important topics, notes, news & blog on the subject. Download the App as quick reference guide & ebook on this Engineering mathematics subject.\n\nIt covers 80 topics of Maths in detail. These 80 topics are divided in 5 chapters.\n\nIt provides quick revision and reference to the topics like a detailed flash card. Each topic is complete with diagrams, equations and other forms of graphical representations for easy understanding.\n\nThe application serves to both engineering students and professionals. Some of topics Covered in this application are:\n\n1. Leibnitz Theorem\n2. Problems on Leibnitz Theorem\n3. Differential Calculus-I\n4. Radius of Curvature\n5. Radius of Curvature in Parametric Form\n6. Problems on Radius of Curvature\n7. Radius of Curvature in Polar Form\n8. Cauchy’s Mean Value Theorem\n9. Taylor’s Theorem\n10. Problems on Fundamental Theorem\n11. Partial Derivatives\n12. Euler Lagrange Equation\n13. Curve Tracing\n14. Change of Variable Theorem\n15. Problems on Differential Calculus I\n16. Indeterminate Forms\n17. Problems on L' Hospital Rule\n18. Various Indeterminate Forms\n19. Problems on Various Indeterminate Forms\n20. Taylor’s Theorem For Function of Two Variables\n21. Problems on Taylor's Theorem\n22. Maxima And Minima of Function of Two Variables\n23. Problems Maxima And Minima of Function of Two Variables\n24. Lagrange’s Method of Undetermined Multipliers\n25. Problems on Lagrange’s Method of Undetermined Multipliers\n26. Polar Curves\n27. Problems on Polar Curve\n28. Jacobian of Transformation\n29. Extrema of Function of Several Variables\n30. Problems on Differential Calculus II\n31. Multiple Integrals\n32. Problems on Multiple Integral\n33. Double Integral by Changing the Order of Integration\n34. Applications to Area and Volume\n35. Problems on Applications to Area and Volume\n36. Beta And Gamma Function\n37. Relationship between Beta and Gamma Functions\n38. Problems on Beta and Gamma Functions\n39. Dirichlet Integral\n40. Dirichlet Integral and Fourier Series\n41. Problems on Dirichlet Integrals\n42. Triple Integrals\n43. Triple Integrals using Cylindrical Coordinates\n44. Problems on Integrals\n45. Objective Question on Integrals\n46. Vector Functions\n47. Vector Line Integral\n48. Green's Theorem\n49. Gauss Divergence Theorem\n50. Stoke Theorem\n51. Surface and Volume Integrals\n52. Problems on Integrals Theorem\n53. Directional Derivative of Vector\n55. Theorem of Line Integral\n56. Orthogonal Curvilinear Coordinates\n57. Differential Operators\n58. Divergence of Vector\n59. Curl of Vector\n60. Problems on Vector Calculus\n61. Introduction of Matrices\n62. Properties of Matrices\n63. Scalar Multiplication\n64. Matrix Multiplication\n65. Transpose of Matrix\n66. Nonsingular Matrix\n67. Echelon Form of Matrix\n68. Determinant\n69. Properties of Determinants\n70. System of Linear Equation\n71. Solution to a Linear System\n72. Solution to Linear System by Inverse Method\n73. Rank and Trace of Matrix\n74. Cayley-Hamilton Theorem\n75. Eigenvalues and Eigenvector\n76. Method of Finding Eigenvalues and Eigenvectors\n77. Diagonalisation of Matrices\n78. Unitary Matrices\n79. Idempotent Matrices\n80. Problems on Matrices\nCollapse\n\nReview Policy\n4.0\n815 total\n5\n4\n3\n2\n1\n\n## What's New\n\nCheck out Top Learning Videos! We have Added\n• Chapter and topics made offline access\n• New Intuitive Knowledge Test & Score Section\n• Search Option with auto-prediction to get straight the your topic\n• Fast Response Time of Application\n• Provide Storage Access for Offline Mode\nCollapse\n\nUpdated\nJanuary 31, 2019\nSize\nVaries with device\nInstalls\n100,000+\nCurrent Version\n7\nRequires Android\n4.0 and up\nContent Rating\nEveryone\nPermissions\nOffered By\nEngineering Apps\nDeveloper",
null,
""
] | [
null,
"https://lh3.googleusercontent.com/3KGTgjZNr7wJHTmT8MGw-8pyQpKiI7YUkhyaIVIMGzNS7GlmE3No8l_A9dqaqlG9Zps=s180",
null,
"https://lh3.googleusercontent.com/IciOnDFecb5Xt50Q2jlcNC0LPI7LEGxNojroo-s3AozcyS-vDCwtq4fn7u3wZmRna8OewG9PBrWC-i7i=s14",
null,
"https://lh3.googleusercontent.com/9rMgqgoFXUGFDWbBTW6OUflDeaqT09pnKOWYNh-8YKiLMghr7o4MGhZXVnX4wZqAL24=w500-h280",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67244923,"math_prob":0.81471163,"size":3270,"snap":"2019-35-2019-39","text_gpt3_token_len":784,"char_repetition_ratio":0.16319656,"word_repetition_ratio":0.024489796,"special_character_ratio":0.23027523,"punctuation_ratio":0.15566836,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9903328,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T18:59:32Z\",\"WARC-Record-ID\":\"<urn:uuid:64b62593-de27-45b7-ad22-3a9ee8a9b612>\",\"Content-Length\":\"609504\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43a5505e-e46b-4634-bbb4-6eac823de9e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:63e24416-6709-41a1-9901-065d89c8132d>\",\"WARC-IP-Address\":\"172.217.7.238\",\"WARC-Target-URI\":\"https://play.google.com/store/apps/details?id=com.faadooengineers.free_mathsforengineers\",\"WARC-Payload-Digest\":\"sha1:XJ2MG5J7Y7EKWSSMTRUEPNAE7MQ2CIXX\",\"WARC-Block-Digest\":\"sha1:ZXKHS3FKFOT3I5OTUTVRIB2A7W3T2W4T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574058.75_warc_CC-MAIN-20190920175834-20190920201834-00375.warc.gz\"}"} |
https://jp.maplesoft.com/support/help/Maple/view.aspx?path=algcurves/AbelMap | [
"",
null,
"Abel map - Maple Help\n\nalgcurves\n\n AbelMap\n compute the Abel map between two points on a Riemann surface",
null,
"Calling Sequence AbelMap(F, x, y, P, P_0, t, accuracy)",
null,
"Parameters\n\n F - irreducible polynomial in x and y specifying a Riemann surface by F(x,y) = 0 x - variable y - variable P - Puiseux representation, in a parameter t of a point on the Riemann surface specified by F(x,y)=0 P_0 - same as P accuracy - number of desired accurate decimal digits",
null,
"Description\n\n • The AbelMap command computes the Abel map between two points P and P_0 on a Riemann surface R of genus g, that is a g-tuple of complex numbers. The jth element of the Abel map is the integral of the jth normalized holomorphic differential integrated along a path from P to P_0.\n • The Riemann surface is entered as F; an irreducible, square-free polynomial in x and y. Floating point numbers are not allowed as coefficients of F. Algebraic numbers are allowed. Curves of arbitrary finite genus with arbitrary singularities are allowed.\n • The points P and P_0 are entered as $[x=a+b{t}^{r},y=\\left(\\mathrm{Laurent series}\\mathrm{in}t\\right)]$, where a and b are constants, and r is an integer. If r < 0, that is, if entering one of the points for $x=\\mathrm{\\infty }$, then a = 0.\n • The differentials are normalized such that the jth differential integrated around the kth cycle, as given by algcurves[homology], is Kronecker delta (j, k).\n Note: The Abel map will almost always be computed along with other objects associated with some polynomial F, such as the Riemann matrix. It is imperative that the order of the differential be the same for each of the objects, and at each stage of the calculation. As no order is imposed by algcurves[differentials], make sure to compute AbelMap and, for instance algcurves[periodmatrix], without a restart (or quit) in between.",
null,
"Notes\n\n • This command is based on code written by Bernard Deconinck, Michael A. Nivala, and Matthew S. Patterson.",
null,
"Examples\n\n > $\\mathrm{with}\\left(\\mathrm{algcurves},\\mathrm{AbelMap},\\mathrm{genus},\\mathrm{puiseux}\\right)$\n $\\left[{\\mathrm{AbelMap}}{,}{\\mathrm{genus}}{,}{\\mathrm{puiseux}}\\right]$ (1)\n > $f≔{y}^{2}-\\left({x}^{2}-1\\right)\\left({x}^{2}-4\\right)\\left({x}^{2}-9\\right)\\left({x}^{2}-16\\right)$\n ${f}{≔}{{y}}^{{2}}{-}\\left({{x}}^{{2}}{-}{1}\\right){}\\left({{x}}^{{2}}{-}{4}\\right){}\\left({{x}}^{{2}}{-}{9}\\right){}\\left({{x}}^{{2}}{-}{16}\\right)$ (2)\n\nGive a look first at the genus\n\n > $\\mathrm{genus}\\left(f,x,y\\right)$\n ${3}$ (3)\n > $\\mathrm{puiseux}\\left(f,x=1,y,0,t\\right)$\n $\\left\\{\\left[{x}{=}{-}{720}{}{{t}}^{{2}}{+}{1}{,}{y}{=}{-}{720}{}{t}\\right]\\right\\}$ (4)\n > $\\mathrm{puiseux}\\left(f,x=4,y,0,t\\right)$\n $\\left\\{\\left[{x}{=}{10080}{}{{t}}^{{2}}{+}{4}{,}{y}{=}{10080}{}{t}\\right]\\right\\}$ (5)\n\nCompute the Abel map for this curve\n\n > $\\mathrm{P_0},P≔\\mathrm{op}\\left(\\right),\\mathrm{op}\\left(\\right)$\n ${\\mathrm{P_0}}{,}{P}{≔}\\left[{x}{=}{-}{720}{}{{t}}^{{2}}{+}{1}{,}{y}{=}{-}{720}{}{t}\\right]{,}\\left[{x}{=}{10080}{}{{t}}^{{2}}{+}{4}{,}{y}{=}{10080}{}{t}\\right]$ (6)\n > $A≔\\mathrm{AbelMap}\\left(f,x,y,P,\\mathrm{P_0},t,7\\right)$\n ${A}{≔}\\left[{-0.5086732390}{-}{1.395818333}{}{I}{,}{0.5158465233}{+}{0.3733240360}{}{I}{,}{0.00716997551}{-}{0.3585270829}{}{I}\\right]$ (7)"
] | [
null,
"https://bat.bing.com/action/0",
null,
"https://jp.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://jp.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://jp.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://jp.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://jp.maplesoft.com/support/help/Maple/arrow_down.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85741264,"math_prob":0.99991834,"size":2419,"snap":"2022-05-2022-21","text_gpt3_token_len":794,"char_repetition_ratio":0.1126294,"word_repetition_ratio":0.011299435,"special_character_ratio":0.2670525,"punctuation_ratio":0.15185185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99897146,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-17T15:48:09Z\",\"WARC-Record-ID\":\"<urn:uuid:60a9e3d7-35c0-4b4a-8319-b78b054628e6>\",\"Content-Length\":\"256050\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d695d26-2452-47c6-b0ea-9abb4329d170>\",\"WARC-Concurrent-To\":\"<urn:uuid:3b4088c5-7718-4cdd-a479-1f43b76df5ca>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://jp.maplesoft.com/support/help/Maple/view.aspx?path=algcurves/AbelMap\",\"WARC-Payload-Digest\":\"sha1:2QAC5QKM2SBFETPCFDMPDR2W6ZMRTUFD\",\"WARC-Block-Digest\":\"sha1:5XO4TKRZC5TM4I2RKJIS7MW7SXVHFPTY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300574.19_warc_CC-MAIN-20220117151834-20220117181834-00236.warc.gz\"}"} |
https://eqarchives.wordpress.com/2013/11/ | [
"## Archive for November, 2013\n\nNovember 22, 2013\n\n## Details\n\nDate/Time (UTC): 12/10/2013 13:11:53\nLat: 35.5042 Long: 23.2773\nMagnitude: 6.2ML [noa], 6.7 Mw [gcmt], 6.4Mw [emsc]\nDepth: 65.2\nLocation: 40 Km S from Andikithira, Crete\nEnergy Released: 30,089.0395TTNT\nCatalog Source: National Observatory of Athens\n.\n\n## summary\n\nmag1= 3 , mag2= 33 , mag3= 9\nmag4= 0 , mag5= 0 , mag6= 1\ntotal= 46\ntotal energy released= 30,109.366 TTNT\n\n## GCMT MTS",
null,
"## summary\n\nmag1= 20 , mag2= 15 , mag3= 2\nmag4= 0 , mag5= 0 , mag6= 0\ntotal= 37\ntotal energy released= 12.450 TTNT\n\n## summary\n\nmag1= 23 , mag2= 48 , mag3= 11\nmag4= 0 , mag5= 0 , mag6= 1\ntotal= 83\ntotal energy released= 30,121.816 TTNT\n\n## summary\n\nmag1= 19 , mag2= 35 , mag3= 6\nmag4= 0 , mag5= 0 , mag6= 0\ntotal= 60\ntotal energy released= 12.048 TTNT\n\n## summary\n\nmag1= 42 , mag2= 83 , mag3= 17\nmag4= 0 , mag5= 0 , mag6= 1\ntotal= 143\ntotal energy released= 30,133.863 TTNT\n\n## Aftershock Graph 19 days",
null,
".\n\n## Felt Reports [emsc]\n\nEMSC, click on image to go to interactive map page.",
null,
""
] | [
null,
"https://dl.dropboxusercontent.com/u/1132313/GCMT/CRETE%2CGREECE2013-10-126-7Mw.png",
null,
"https://dl.dropboxusercontent.com/u/1132313/WEA/Greece/AndikithiraAftershockGraph.png",
null,
"https://dl.dropboxusercontent.com/u/1132313/WEA/Greece/AndikithirafeltReportsSSemsc.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5740772,"math_prob":0.9822395,"size":1167,"snap":"2020-10-2020-16","text_gpt3_token_len":512,"char_repetition_ratio":0.16509028,"word_repetition_ratio":0.14220184,"special_character_ratio":0.47386461,"punctuation_ratio":0.2086331,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9694816,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-18T01:15:14Z\",\"WARC-Record-ID\":\"<urn:uuid:c0f9b012-8100-4a16-a473-3c1a91af764b>\",\"Content-Length\":\"81309\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a089039c-1f9f-4325-968c-eea28d7f03b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:799df2a6-9b75-413f-8101-1ba9a61aae66>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://eqarchives.wordpress.com/2013/11/\",\"WARC-Payload-Digest\":\"sha1:BGRLVS7CI2X5HOTXRQREJGBVCXHHNVH3\",\"WARC-Block-Digest\":\"sha1:YWNW2ZFHFNEJGFKUCAAOVX5HWHB5XQJT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143455.25_warc_CC-MAIN-20200217235417-20200218025417-00244.warc.gz\"}"} |
https://gitlab.com/competitive-programming/contest/-/commit/1e6e8b0f0aad885a4c4a1f64a1e583519bab7cec | [
"* added mobius inversion\n* Added floor formula\n ... ... @@ -48,3 +48,17 @@ $\\sum_{d|n} d = O(n \\log \\log n)$. The number of divisors of $n$ is at most around 100 for $n < 5e4$, 500 for $n < 1e7$, 2000 for $n < 1e10$, 200\\,000 for $n < 1e19$. \\section{Mobius Function} $\\mu(n) = \\begin{cases} 0 & n \\textrm{ is not square free}\\\\ 1 & n \\textrm{ has even number of prime factors}\\\\ -1 & n \\textrm{ has odd number of prime factors}\\\\\\end{cases}$ Mobius Inversion: $g(n) = \\sum_{d|n} f(d) \\Leftrightarrow f(n) = \\sum_{d|n} \\mu(d)g(n/d)$ Other useful formulas/forms: $\\sum_{d | n} \\mu(d) = [ n = 1]$ (very useful) $g(n) = \\sum_{n|d} f(d) \\Leftrightarrow f(n) = \\sum_{n|d} \\mu(d/n)g(d)$ $g(n) = \\sum_{1 \\leq m \\leq n} f(\\left\\lfloor\\frac{n}{m}\\right \\rfloor ) \\Leftrightarrow f(n) = \\sum_{1\\leq m\\leq n} \\mu(m)g(\\left\\lfloor\\frac{n}{m}\\right\\rfloor)$"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5014436,"math_prob":1.0000082,"size":1068,"snap":"2020-24-2020-29","text_gpt3_token_len":438,"char_repetition_ratio":0.12593985,"word_repetition_ratio":0.04733728,"special_character_ratio":0.42228463,"punctuation_ratio":0.07619048,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000094,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T18:48:00Z\",\"WARC-Record-ID\":\"<urn:uuid:4761d43d-76a5-4a52-a4dc-05c35ce69dd2>\",\"Content-Length\":\"83122\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bdac94dc-6136-4339-85ed-6546322f15ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:7881d072-6ff4-49b3-af08-7cb51378ba1c>\",\"WARC-IP-Address\":\"172.65.251.78\",\"WARC-Target-URI\":\"https://gitlab.com/competitive-programming/contest/-/commit/1e6e8b0f0aad885a4c4a1f64a1e583519bab7cec\",\"WARC-Payload-Digest\":\"sha1:TEWH3VSHNU5PC3NHAOHR4A4NIIYE3S4V\",\"WARC-Block-Digest\":\"sha1:AOHN2QUTGKIB7KMGL5PXAEMCC2MUZNQP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655881763.20_warc_CC-MAIN-20200706160424-20200706190424-00267.warc.gz\"}"} |
https://www.transtutors.com/questions/fixed-costs-labor-variable-cost-total-costs-avg-fixed-cost-avg-variable-cost-avg-tot-3246429.htm | [
"# Fixed Costs Labor Variable Cost Total Costs Avg Fixed Cost Avg Variable Cost Avg Total Costs...\n\n Fixed Costs Labor Variable Cost Total Costs Avg Fixed Cost Avg Variable Cost Avg Total Costs Marginal Cost Total Revenue Marginal Revenue Profit Q P FC L VC TC AFC AVC ATC MC TR MR ? given given given given W*L FC+VC FC/Q VC/Q TC/Q ?TC/?Q P*Q ?TR/?Q TR - TC 0 - $2,000,000 0$0 $2,000,000 - - - -$0 - ($2,000,000) 5000$500 $2,000,000 2500$750,000 $2,750,000$400 $150$550 $150$2,500,000 $500 ($250,000) 10000 $475$2,000,000 4000 $1,200,000$3,200,000 $200$120 $320$90 $4,750,000$450 $1,550,000 15000$450 $2,000,000 5000$1,500,000 $3,500,000$133 $100$233 $60$6,750,000 $400$3,250,000 20000 $425$2,000,000 5500 $1,650,000$3,650,000 $100$83 $183$30 $8,500,000$350 $4,850,000 25000$400 $2,000,000 6500$1,950,000 $3,950,000$80 $78$158 $60$10,000,000 $300$6,050,000 30000 $375$2,000,000 8000 $2,400,000$4,400,000 $67$80 $147$90 $11,250,000$250 $6,850,000 35000$350 $2,000,000 10000$3,000,000 $5,000,000$57 $86$143 $120$12,250,000 $200$7,250,000 40000 $325$2,000,000 12500 $3,750,000$5,750,000 $50$94 $144$150 $13,000,000$150 $7,250,000 45000$300 $2,000,000 15500$4,650,000 $6,650,000$44 $103$148 $180$13,500,000 $100$6,850,000 50000 $275$2,000,000 19000 $5,700,000$7,700,000 $40$114 $154$210 $13,750,000$50 $6,050,000 55000$250 $2,000,000 25000$7,500,000 $9,500,000$36 $136$173 $360$13,750,000 $0$4,250,000 60000 $225$2,000,000 32000 $9,600,000$11,600,000 $33$160 $193$420 $13,500,000 ($50) \\$1,900,000\n\nIs this a perfectly competitive firm? If not, what type of firm, do you think it is? Why do you say that?\n\nWhat would be the firm’s output level and price? Why, what rule does the firm use to pick its output?",
null,
""
] | [
null,
"https://files.transtutors.com/resources/images/answer-blur.webp",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8262306,"math_prob":1.0000098,"size":2568,"snap":"2020-10-2020-16","text_gpt3_token_len":1020,"char_repetition_ratio":0.1649766,"word_repetition_ratio":0.0,"special_character_ratio":0.53543615,"punctuation_ratio":0.21091445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997028,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T21:19:32Z\",\"WARC-Record-ID\":\"<urn:uuid:8283b7fc-c4fc-47aa-bb07-70adb65829be>\",\"Content-Length\":\"83043\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6ab2784-77d5-4143-86f3-90dce17d8498>\",\"WARC-Concurrent-To\":\"<urn:uuid:733a2bd5-95f8-48a6-9b0c-15ddc325c56a>\",\"WARC-IP-Address\":\"35.199.55.187\",\"WARC-Target-URI\":\"https://www.transtutors.com/questions/fixed-costs-labor-variable-cost-total-costs-avg-fixed-cost-avg-variable-cost-avg-tot-3246429.htm\",\"WARC-Payload-Digest\":\"sha1:YGRMQXME7EUNYJTCQIBQHX3B2AVMMQV6\",\"WARC-Block-Digest\":\"sha1:M6AYA4TTBM62ITT4B6LT72EKTAWMMVPH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875147647.2_warc_CC-MAIN-20200228200903-20200228230903-00148.warc.gz\"}"} |
http://vinc17.net/research/fptest.en.html | [
"",
null,
"# Floating-Point Arithmetic Test Programs\n\nHere are a few C programs to test the floating-point arithmetic of your machine:\n\ntst-ieee754.c\n\nMiscellaneous tests of the floating-point arithmetic: test of the evaluation of the floating-point expressions in C (showing a bug in gcc when the processor is configured in extended precision, e.g. under Linux/x86), test of the addition and the multiplication with signed zeros, test of the power function `pow`. The results on various machines (compressed archive) and a partial summary (text file).\n\nnearestint.c\n\nTest of the nearest integer functions, in the four IEEE-754 rounding modes: cast to int type, `trunc`, `floor`, `ceil`, `round`, `nearbyint`, `rint`. These tests show a bug in the PowerPC implementation of the `rint` function in the glibc (fixed in the CVS on 6 January 2005).\n\ntestatan.c\n\nTest of the arc tangent function `atan` (see Debian bug 210613).\n\ntestlog.c\n\nTest of the logarithm function `log` (see Debian bug 210400).\n\ntestsin.c\n\nTest of the sine function `sin`.\n\noverflow.c\n\nTest of the overflow behavior with gcc and intermediate extended precision (e.g. under Linux/x86).\n\ntestlgamma.c\n\nTest of the gamma functions `tgamma`, `lgamma` and `lgamma_r`.\n\nremquo.c\n\nTest of the `remquo` function.\n\nfma-tests.c\n\nTest of the `fma` function (see glibc bug 3268 and Debian bug 372544).\n\ncontract2fma.c\n\nTest the effect of the contraction to FMA by computing `sqrt (a * a - b * b)` with a = b (the code is written in such a way that this equality should not be used for optimizing at compile time). Under some conditions, the ISO C99 standard allows floating expressions to be contracted: for instance, the expression `x * y + z` can be contracted into a single operation (with only one rounding) on processors that have a FMA. In general, this improves the accuracy of the numerical computations, but this can also have the opposite effect, and even yield side effects, as in the above code, where the symmetry is broken by the introduction of a FMA. That's why the contraction of floating expressions can be disabled by the FP_CONTRACT pragma; in this case, the result of this program should always be 0. But GCC up to 4.8 ignores FP_CONTRACT pragma set to OFF, so that the program gives incorrect results on PowerPC, IA-64 (Itanium), and recent x86_64 processors. Worse, even on the general (non-trivial and possibly useful in practice) case `a >= b ? sqrt (a * a - b * b) : 0`, one can get a NaN.\n\nFor instance, when running contract2fma (compiled with GCC 4.8 or below on one of the architectures mentioned above) on 1, 1.1 and 1.2, one gets:\n\n```Test of a >= b ? sqrt (a * a - b * b) : 0 with FP_CONTRACT OFF\ntest(1) = 0\ntest(1.1000000000000000888) = 2.9802322387695326562e-09\ntest(1.1999999999999999556) = nan```\n\nNote: With GCC 4.9 and later, one needs to compile in ISO C mode (-std=c99 or -std=c11) to have contraction of floating expressions disabled by default (necessary since GCC still ignores the FP_CONTRACT pragma).\n\nrndmode.c\n\nMeasure the `fegetround`/`fesetround` performance when the rounding mode doesn't change. See glibc very old thread, and in particular this message.\n\ntransf-generated.c\n\nTest various simple expressions that can lead to transformations by the compiler in order to optimize. Such transformations are valid on the real numbers, but can here be invalid due to special IEEE-754 values (NaN, infinities, signed zeros).\n\nThe results can be affected by compiler options related to optimization of floating-point expressions. See an example of use with GCC (details about floating-point with GCC). In errors, y0 is the obtained value and y1 is the expected (correct) value. Note: it is assumed that the `volatile` qualifier has the effect to disable the optimizations, otherwise nothing is tested (see the source).\n\nThis C file is actually generated from transformations.c with the gen-transf Perl script.\n\[email protected]"
] | [
null,
"http://vinc17.net/graphics/french.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85391825,"math_prob":0.8428178,"size":3633,"snap":"2023-14-2023-23","text_gpt3_token_len":873,"char_repetition_ratio":0.1325434,"word_repetition_ratio":0.036124796,"special_character_ratio":0.25406,"punctuation_ratio":0.121640734,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98904276,"pos_list":[0,1,2],"im_url_duplicate_count":[null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-29T04:03:16Z\",\"WARC-Record-ID\":\"<urn:uuid:233d7ac7-be4d-4f24-8747-f31b21fa6516>\",\"Content-Length\":\"11862\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac5a76e8-6d7c-4469-8f8b-6e583470d631>\",\"WARC-Concurrent-To\":\"<urn:uuid:de28cfe6-9e08-4838-a0e8-d3eb11640d58>\",\"WARC-IP-Address\":\"155.133.131.76\",\"WARC-Target-URI\":\"http://vinc17.net/research/fptest.en.html\",\"WARC-Payload-Digest\":\"sha1:OHC6YKTAG4KXOULE4QNRR2TNBGIVIPN2\",\"WARC-Block-Digest\":\"sha1:EIIYIPXQTHLLASI556HM3NJOECGJC2O6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948932.75_warc_CC-MAIN-20230329023546-20230329053546-00625.warc.gz\"}"} |
https://www.statsdirect.com/help/methods/relationship_two_variables_measured_on_same_group.htm | [
"# Relationship Between Two Variables Measured on the Same Group\n\nSelect the appropriate combination of measurement scales (key below):\n\n Variable X Variable Y INTERVAL (NORMAL)+ Interval (normal) Interval (non-normal) Ordinal Nominal (ordered) Nominal (no order) Dichotomous INTERVAL (NON-NORMAL)+ Interval (normal) Interval (non-normal) Ordinal Nominal (ordered) Nominal (no order) Dichotomous ORDINAL + Interval (normal) Interval (non-normal) Ordinal Nominal (ordered) Nominal (no order) Dichotomous NOMINAL (ORDERED)+ Interval (normal) Interval (non-normal) Ordinal Nominal (ordered) Nominal (no order) Dichotomous NOMINAL (NO ORDER)+ Interval (normal) Interval (non-normal) Ordinal Nominal (ordered) Nominal (no order) Dichotomous DICHOTOMOUS + Interval (normal) Interval (non-normal) Ordinal Nominal (ordered) Nominal (no order) Dichotomous\n\nKey\n\nINTERVAL is a scale with a fixed and defined interval e.g. temperature in degrees Celsius, so the difference between the 5th and 6th points on the scale is identical to the difference between the 8th and 9th etc. Ratio scales possess all properties of interval scales plus an absolute zero point (complete absence of the property being measured), e.g. temperature in degrees Kelvin. (NORMAL vs. NON-NORMAL: The Shapiro-Wilk W test can be used to investigate a sample for evidence of \"non-normality\")\n\nORDINAL is a scale for ordering subjects from low to high with any ties attributed to lack of measurement sensitivity e.g. pain score from a questionnaire.\n\nNOMINAL with order is a scale for grouping into categories with order e.g. mild, moderate or severe. This can be difficult to separate from ordinal.\n\nNOMINAL without order is a scale for grouping into unique categories e.g. blood group.\n\nDICHOTOMOUS is a scale like nominal but two categories only e.g. surgery / no surgery."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.74357426,"math_prob":0.88578624,"size":1867,"snap":"2020-10-2020-16","text_gpt3_token_len":476,"char_repetition_ratio":0.19806764,"word_repetition_ratio":0.2007722,"special_character_ratio":0.22281736,"punctuation_ratio":0.08305648,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9852912,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-30T22:32:31Z\",\"WARC-Record-ID\":\"<urn:uuid:0f5370f1-660d-4ca4-a76f-9bb75ffbf33e>\",\"Content-Length\":\"13883\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a11c0e29-f84f-4639-8e40-d68d8f29856e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7db0f799-d068-431e-a576-b1d99241aa25>\",\"WARC-IP-Address\":\"217.146.97.44\",\"WARC-Target-URI\":\"https://www.statsdirect.com/help/methods/relationship_two_variables_measured_on_same_group.htm\",\"WARC-Payload-Digest\":\"sha1:MVYJUQFP6Y7OX2U5V5M3ZCG6P5KFAHIR\",\"WARC-Block-Digest\":\"sha1:2ICWWVJJN4MOF3FNGIWPYVYH4XXFGNVI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370497309.31_warc_CC-MAIN-20200330212722-20200331002722-00322.warc.gz\"}"} |
https://brave-mayer.netlify.app/y-hat-on-ti-84.html | [
"# Y Hat On Ti 84",
null,
"### Ti 84 Pius L Texas Instruments Jojo Bizzare Adventure Jojo Bizarre Jojo S Bizarre Adventure",
null,
"### How To Use Ti83 Or 84 Calculator Ap Statistics Graphing Calculator Calculator",
null,
"### Calculating Residuals Making Residual Plots On Ti 84 Plus Youtube",
null,
"### Texas Instruments Ti 84 Plus Ce Graphing Calculator White Walmart Com Graphing Calculator Calculator Color Graphing",
null,
"Y hat on ti 84. Suppose we are interested in understanding the relationship between the number of hours a student studies for an exam and the exam score they receive. The normal distribution is the most commonly used distributions in all of statistics. This tutorial explains how to use the following functions on a TI-84 calculator to find normal distribution probabilities.\n\nThis item accesses special characters and accent marks used in the language that you chose for localization. How to find the regression equation using a TI-84. They obtain a random sample of 74 cars and find that the mean is 2129 mpg while the standard deviation is 578 mpg.\n\nAnd use to scroll right to the TESTS menu. Press STAT 1 to access the STAT list editor. Each measured data point has an associated residual defined as yŷ the distance of the point above or below the line.\n\nOne Sample t-test on a TI-84 Calculator. Laura Schultz The 1-PropZInt command is used to construct a confidence-interval estimate of a population proportion p or percentage. Use this data to perform a one sample t-test to determine if the true mpg for this type of car.",
null,
"### Linear Regression Ti84 Line Of Best Fit Youtube",
null,
"### Using A Ti 84 To Calculate The Mean And Standard Deviation Of A Data Set Sample Youtube",
null,
"### Using Nderiv With Y Vars On A Ti 84 Graphing Calculator Calculus Graphing Calculator Calculus Calculator",
null,
"### How To Type Math Pi Math On A Ti 84 Plus Ce Calculator Quora",
null,
"### Using The Ti 84 Plus Ce When Working With Complex Numbers Complex Numbers College Algebra Teaching Survival",
null,
"### How To Find Any Character On Ti 84 Plus Math Class Calculator",
null,
"### Ti 83 Plus Graphing Calculator Instructions Neoprene Case Texas Instruments Texasinstruments Graphing Graphing Calculator Numerology",
null,
"### Ti 84 83 82 85 86 Taschenrechner Speicher Zurucksetzen How To Reset Ram Memory Calculator Youtube\n\nSource : pinterest.com"
] | [
null,
"https://i2.wp.com/i.pinimg.com/originals/ad/7d/a6/ad7da62676b4661e8424e92d13a2a092.jpg",
null,
"https://i2.wp.com/i.pinimg.com/originals/f6/4f/a6/f64fa6b1d9362c165b6cef895b7f7db7.jpg",
null,
"https://i2.wp.com/i.ytimg.com/vi/8DcguvYbf7g/maxresdefault.jpg",
null,
"https://i2.wp.com/i.pinimg.com/474x/d3/e2/cf/d3e2cf301afdae89b73c8c81d6df1c19.jpg",
null,
"https://img.wonderhowto.com/img/46/12/63441142383574/0/graph-mario-ti-83-calculator.w1456.jpg",
null,
"https://i2.wp.com/i.ytimg.com/vi/0as2Jh_eDwg/maxresdefault.jpg",
null,
"https://i2.wp.com/i.ytimg.com/vi/gTOVeAf3HA4/maxresdefault.jpg",
null,
"https://i2.wp.com/i.pinimg.com/564x/c5/f7/bc/c5f7bc8639d0183bee0e73b0e5ec30a8.jpg",
null,
"https://i2.wp.com/qph.fs.quoracdn.net/main-qimg-3c94414448c8acdd4ed8f0b847fb963a",
null,
"https://i2.wp.com/i.pinimg.com/originals/2b/04/f2/2b04f2f6476a5b37b5e0768dffc451a4.jpg",
null,
"https://i2.wp.com/mathclasscalculator.com/wp-content/uploads/2018/03/Capture-1.png",
null,
"https://i2.wp.com/i.pinimg.com/originals/53/ea/07/53ea0762d43a457bd7058418d2951ab0.jpg",
null,
"https://i2.wp.com/i.ytimg.com/vi/iiPRtS63P7c/hqdefault.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.74888855,"math_prob":0.84940743,"size":2672,"snap":"2022-05-2022-21","text_gpt3_token_len":605,"char_repetition_ratio":0.104197904,"word_repetition_ratio":0.08456659,"special_character_ratio":0.21407185,"punctuation_ratio":0.042372882,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9853966,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,2,null,9,null,6,null,4,null,2,null,7,null,4,null,4,null,4,null,4,null,4,null,6,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T08:37:04Z\",\"WARC-Record-ID\":\"<urn:uuid:e614314e-a13f-4a02-85e2-d19a3f7cc655>\",\"Content-Length\":\"36518\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:656b9a74-1d82-45e2-9ad8-7432805df4e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:7ee73ca8-8e22-4d82-876d-2d067810eabd>\",\"WARC-IP-Address\":\"34.204.131.44\",\"WARC-Target-URI\":\"https://brave-mayer.netlify.app/y-hat-on-ti-84.html\",\"WARC-Payload-Digest\":\"sha1:VQPE3BXEQYKT3TA2FFN44HAYY6GQNRAM\",\"WARC-Block-Digest\":\"sha1:E5KZUGFYIRDMLCRQ45FFKN2C5S6MYEX2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663013003.96_warc_CC-MAIN-20220528062047-20220528092047-00293.warc.gz\"}"} |
http://slideplayer.com/slide/8262412/ | [
"Presentation is loading. Please wait.",
null,
"# 4.1 The Indefinite Integral. Antiderivative An antiderivative of a function f is a function F such that Ex.An antiderivative of since is.\n\n## Presentation on theme: \"4.1 The Indefinite Integral. Antiderivative An antiderivative of a function f is a function F such that Ex.An antiderivative of since is.\"— Presentation transcript:\n\n4.1 The Indefinite Integral\n\nAntiderivative An antiderivative of a function f is a function F such that Ex.An antiderivative of since is\n\nmeans to find the set of all antiderivatives of f. The expression: read “the indefinite integral of f with respect to x,” Integral sign Integrand Indefinite Integral x is called the variable of integration\n\nEvery antiderivative F of f must be of the form F(x) = G(x) + C, where C is a constant. Notice Constant of Integration Represents every possible antiderivative of 6x.\n\nPower Rule for the Indefinite Integral, Part I Ex.\n\nSum and Difference Rules Ex. Constant Multiple Rule Ex.\n\nPosition, Velocity, and Acceleration Derivative Form If s = s(t) is the position function of an object at time t, then Velocity = v =Acceleration = a = Integral Form\n\nDIFFERENTIAL EQUATIONS A differential equation is an equation that contains a derivative. For example, this is a differential equation. From antidifferentiating skills from last chapter, we can solve this equation for y.\n\nTHE CONCEPT OF THE DIFFERENTIAL EQUATION The dy/dx = f(x) means that f(x) is a rate. To solve a differential equation means to solve for the general solution. By integrating. It is more involved than just integrating. Let’s look at an example:\n\nEXAMPLE 1 GIVEN Multiply both sides by dx to isolate dy. Bring the dx with the x and dy with the y. Since you have the variable of integration attached, you are able to integrate both sides. Note: integral sign without limits means to merely find the antiderivative of that function Notice on the right, there is a C. Constant of integration.\n\nC?? What is that? Remember from chapter 2? The derivative of a constant is 0. But when you integrate, you have to take into account that there is a possible constant involved. Theoretically, a differential equation has infinite solutions. To solve for C, you will receive an initial value problem which will give y(0) value. Then you can plug 0 in for x and the y(0) in for y. Continuing the previous problem, let’s say that y(0)=2.\n\nSolving for c. Continuing the previous problem, let’s say that y(0)=2.\n\nBasic Integration Rules\n\nDownload ppt \"4.1 The Indefinite Integral. Antiderivative An antiderivative of a function f is a function F such that Ex.An antiderivative of since is.\"\n\nSimilar presentations\n\nAds by Google"
] | [
null,
"http://slideplayer.com/static/blue_design/img/slide-loader4.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9056132,"math_prob":0.9919767,"size":2338,"snap":"2021-04-2021-17","text_gpt3_token_len":559,"char_repetition_ratio":0.14481577,"word_repetition_ratio":0.020253165,"special_character_ratio":0.22369547,"punctuation_ratio":0.11853448,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990932,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-12T20:26:30Z\",\"WARC-Record-ID\":\"<urn:uuid:4edacb55-aab9-4a12-be66-c74803e5bcc2>\",\"Content-Length\":\"152827\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72019202-447b-48b3-aaa2-88e6b2956e7d>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac85c6b9-4f98-41a1-8c76-1a44f0832563>\",\"WARC-IP-Address\":\"138.201.54.25\",\"WARC-Target-URI\":\"http://slideplayer.com/slide/8262412/\",\"WARC-Payload-Digest\":\"sha1:33LE4TCMPFDWYCARGS3MHPO4FVJVDPOO\",\"WARC-Block-Digest\":\"sha1:65QXBFGSIHIF5MHUPJV62PGX3BPPVMDH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038069133.25_warc_CC-MAIN-20210412175257-20210412205257-00361.warc.gz\"}"} |
https://www.csauthors.net/ali-r-ansari/ | [
"# Ali R. Ansari\n\nAccording to our database1, Ali R. Ansari authored at least 8 papers between 2000 and 2012.\n\nCollaborative distances:\n• Dijkstra number2 of five.\n• Erdős number3 of five.\n\nBook\nIn proceedings\nArticle\nPhD thesis\nOther\n\n## Bibliography\n\n2012\nA numerical layer resolving method for the flow over a curved surface.\nAppl. Math. Comput., 2012\n\n2011\nAn approximate solution for the static beam problem and nonlinear integro-differential equations.\nComput. Math. Appl., 2011\n\nA semi-analytical iterative technique for solving nonlinear problems.\nComput. Math. Appl., 2011\n\nA new iterative technique for solving nonlinear second order multi-point boundary value problems.\nAppl. Math. Comput., 2011\n\n2010\nSome solutions of the linear and nonlinear Klein-Gordon equations using the optimal homotopy asymptotic method.\nAppl. Math. Comput., 2010\n\n2004\nAn evolutionary approach to Wall Shear Stress prediction in a grafted artery.\nAppl. Soft Comput., 2004\n\n2002\nA Re-examination Of The Cart Centering Problem Using The Chorus System.\nProceedings of the GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference, 2002\n\n2000\nA Parameter Robust Method for a Problem with a Symmetry Boundary Layer.\nProceedings of the Numerical Analysis and Its Applications, 2000"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5989981,"math_prob":0.4606501,"size":1278,"snap":"2020-24-2020-29","text_gpt3_token_len":311,"char_repetition_ratio":0.116954476,"word_repetition_ratio":0.022346368,"special_character_ratio":0.23474178,"punctuation_ratio":0.18534483,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9609109,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-25T20:45:57Z\",\"WARC-Record-ID\":\"<urn:uuid:27d6a781-b932-440c-9923-8c8adf8e273e>\",\"Content-Length\":\"16088\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd6530c4-d486-4f43-bfb2-91fa668ce099>\",\"WARC-Concurrent-To\":\"<urn:uuid:02422274-19fe-498d-84b8-460d786e6068>\",\"WARC-IP-Address\":\"173.212.254.2\",\"WARC-Target-URI\":\"https://www.csauthors.net/ali-r-ansari/\",\"WARC-Payload-Digest\":\"sha1:NQHR3S3MPWDGNG5T5GLBYOVAGPDA5ZD2\",\"WARC-Block-Digest\":\"sha1:7WKKJU2QEWNG3A34NHL4FHLCWSHE7LQ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347389355.2_warc_CC-MAIN-20200525192537-20200525222537-00212.warc.gz\"}"} |
https://codingsight.com/tricky-questions-about-c-sharp/ | [
"Some questions may seem too basic, but they still contain tiny tricks. Sometimes even a simple question may nail to the wall. These questions will be useful to all who study the language.\n\nSo, let’s start!\n\n### 1. What will be the result of execution of the following code?\n\n`````` static String str;\nstatic DateTime time;\n\nstatic void Main(string[] args)\n{\nConsole.WriteLine(str == null ? \"str == null\" : str);\nConsole.WriteLine(time == null ? \"time == null\" : time.ToString());\n}``````\n\nstr == null\n\n1/1/0001 12:00:00 AM\n\nBoth variables are not initialized, but a string is a reference type (to be more specific, it is an immutable type, which means reference type with the semantics of the value type). DateTime is a value type. The default value of the initialized DateTime type is 1/1/0001 12:00:00 AM.\n[/expand]\n\n### 2. Let’s play with inheritance. What will be outputted?\n\n`````` class A\n{\npublic void abc(int q)\n{\nConsole.WriteLine(\"abc from A\");\n}\n}\n\nclass B : A\n{\npublic void abc(double p)\n{\nConsole.WriteLine(\"abc from B\");\n}\n}\n\nstatic void Main(string[] args)\n{\nint i = 5;\nB b = new B();\nb.abc(i);\n}``````\n\nabc from B.\n\n[/expand]\n\n### 3. A similar question. What will be the result?\n\n`````` class P\n{ }\nclass Q : P\n{ }\n\nclass A\n{\npublic void abc(Q q)\n{\nConsole.WriteLine(\"abc from A\");\n}\n}\n\nclass B : A\n{\npublic void abc(P p)\n{\nConsole.WriteLine(\"abc from B\");\n}\n}\n\nstatic void Main(string[] args)\n{\nB b = new B();\nb.abc(new Q());\n}\n``````\n\nabc from B.\n\nHere everything is more obvious than in the previous question.\n\n[/expand]\n\n### 4. A typical polymorphism understanding swindle. The main thing is not to forget and overlook anything. What will be the result of execution of the following code?\n\n`````` class Program\n{\nstatic void Main(string[] args)\n{\nMyClassB b = new MyClassB();\nMyClassA a = b;\na.abc();\n}\n}\n\nclass MyClassA\n{\npublic MyClassA()\n{\nConsole.WriteLine(\"constructor A\");\n}\n\npublic void abc()\n{\nConsole.WriteLine(\"A\");\n}\n}\n\nclass MyClassB:MyClassA\n{\npublic MyClassB()\n{\nConsole.WriteLine(\"constructor B\");\n}\n\npublic void abc()\n{\nConsole.WriteLine(\"B\");\n}\n}``````\n\nconstructor A\nconstructor B\nA\n\nDuring initialization of the B class, the constructor of the A class will be executed by default, then constructor of the B class. After assignment of the b value to a type variable of the A class, we will get an instance of the B class in it. One would think that abc() from the B class should be called, but since there is no specification of any predicate of the abc method in the B class, it hides abc from the A class. The example is not quite correct and abc() in the B class will be underlined, since the new predicate is required.[/expand]\n\n### 5. I have the following class:\n\n`````` public class Point\n{\npublic int X { get; set; }\npublic int Y { get; set; }\npublic Point(int xPos, int yPos)\n{\nX = xPos;\nY = yPos;\n}\n}``````\n\nAlso, I have 3 instances of the class. Will the following initialization of the third instance work? If it won’t, what should be done?\n\n``` Point ptOne = new Point(15, 20);\nPoint ptTwo = new Point(40, 50);\nPoint ptThree = ptOne + ptTwo;```\n\nOf course, the example will not work. To make this code work, we need to add the addition operator overloading to the Point class. For example, the following one:\n\n``` public static Point operator +(Point p1, Point p2)\n{\nreturn new Point(p1.X + p2.X, p1.Y + p2.Y);\n}```\n\n[/expand]\n\n### 6. What will be the execution result of the following query?\n\n`````` string result;\n\nprivate async void btnStart_Click(object sender, RoutedEventArgs e)\n{\nSaySomething();\ntxtSomeTextBlock.Text = result;\n}\n\n{\n\nresult = \"Hello world!\";\nreturn result;\n}``````\n\nAn empty string will be outputted instead of ‘Hello world!’. The SaySomething() task was called without await, and that is why, SaySomethinf is executed synchronously till the first await, i.e. till the first string:\n\n`await System.Threading.Tasks.Task.Delay(1000);`\n\nAfter this, execution is returned to btnStartClick. If we use await during the call of SaySomething(), the result will be expected, – ‘Hello world’ will be outputted.[/expand]\n\n### 7. A must-know question. What will be the execution result of the following query?\n\n`````` delegate void SomeMethod();\n\nstatic void Main(string[] args)\n{\nList<SomeMethod> delList = new List<SomeMethod>();\nfor (int i = 0; i < 10; i++)\n{\n}\n\nforeach (var del in delList)\n{\ndel();\n}\n}``````\n\nThe program will output the number 10 times.\n\nThe delegate was added 10 times. The reference to the i variable was taken. The reference, not the value. That is why, when calling a delegate, the last value of the i variable is taken. It is a typical example of closure.[/expand]\n\n### 8. What will be outputted with the following code?\n\n`````` static bool SomeMethod1()\n{\nConsole.WriteLine(\"Method 1\");\nreturn false;\n}\n\nstatic bool SomeMethod2()\n{\nConsole.WriteLine(\"Method 2\");\nreturn true;\n}\nstatic void Main(string[] args)\n{\nif (SomeMethod1() & SomeMethod2())\n{\nConsole.WriteLine(\"the if block has was executed\");\n}\n}``````\n\nMethod 1\n\nMethod 2\n\nThe if block won’t be executed since SomeMethod1 returns false. But, since the & logical operator is used, the second condition, SomeMethod2, will be also checked. If a more conventional && operator was used, only the value of the first method would be checked.[/expand]\n\n### 9. One more simple question. What will be the output of the following code?\n\n`````` double e = 2.718281828459045;\nobject o = e; // box\nint ee = (int)o;``````\n\nThis code won’t work and will throw an exception in the last string. However, the next casting (in other words, explicit conversion) of the error should not call the error, but just looses fractional part of a number.But during unboxing, the object is checked for the value of the requested type. And the value is copied into the variable only after the first check. And the next one will make unboxing without the error:\n\n``` double e = 2.718281828459045;\nint ee = (int)e;```\n\nBut during unboxing, the object is checked for the value of the requested type. And the value is copied into the variable only after the first check.\n\nAnd the next one will make unboxing without the error:\n\n` int ee = (int)(double)o;`\n\nThe following code will firtsly execute upcast of the o object to the dynamic tye, and then will make casting, not unboxing:\n\n``` int ee = (int)(o as dynamic);\n```\n\nHowever, it is similar to the following code:\n\n` int ee = (int)(o is dynamic ? (dynamic)o : (dynamic)null);`\n\nAs a result, it will be identical to the first example:\n\n` int ee = (int)(dynamic)o;`\n\nAlthough, it seemed to be a new trick.[/expand]\n\n### 10. What will happen after the execution of the following code?\n\n`````` float q = float.MaxValue;\nfloat w = float.MaxValue;\n\nchecked\n{\nfloat a = q * w;\nConsole.WriteLine(a.ToString());\n}``````\n\nInfinity will be outputted.\n\nFloat and double are not integral types. That is why, if we use checked, overflow does not take place. Though, if we used int, byte, short or long, an expected error would occur. Unchecked will not work with not built-in types as well. For example, the following code:\n\n``` decimal x = decimal.MaxValue;\ndecimal y = decimal.MaxValue;\n\nunchecked\n{\ndecimal z = x * y;\nConsole.WriteLine(z.ToString());\n}```\n\nwill trhow System.OverflowException.[/expand]\n\n### 11. Let’s play with the decimal type. Will something be outputted? If it will, what will it be?\n\n`````` int x = 5;\ndecimal y = x / 12;\nConsole.WriteLine(y.ToString());``````\n\nThis example will output 0 because x has not been reduced to the decimal type. Since x is of the integer type, when 5 is divided by 12, the result is a number less than 1, that is, an integral zero. The correct result can be obtained with the following line:\n\n` decimal y = (decimal)x / 12;`\n\n[/expand]\n\n### 12. What is the output of the following code?\n\n`````` double d=5.15;\nd = d / 0;\n\nfloat f = 5.15f;\nf = f / 0;\n\ndecimal dc = 5.12m;\ndc = dc / 0;``````\n\nDivision of the double and float types by 0 will return infinity, while decimal will throw System.DivideByZeroException.\n\nJust in case, a few words about about the diffs between decimal and float:\n\n• decimal (128-bit data type with precision of 28-29 digits) is used for finacial calculations which require high precision and no errors in rounding;\n• double ( a 54-bit type with precision accuracy to 15-16 digits) is a basic type for storing floating-point values. Used in most cases (except financial);\n• float (32-bit data type with precision of 7 digits) – a type with the lowest precision and range, but with the highest performance. Rounding errors are possible. Used for heavy-loaded calculations.[/expand]\n\n### 13. Consider, we have the following method:\n\n``````int SomeMethod(int x, int y)\n{\nreturn (x - y) * (x + y);\n}``````\n\nCan we call it in the following way?\n\n``SomeMethod(y:17, x:21)``\n\nYes, we can. We can also call it in a different way:\n\n`SomeMethod(11, y:27)`\n\nBut we can’t call it in the following way:\n\n`SomeMethod(x:12, 11)`\n\nUPDATE: As from C# 7.2, making such call might become impossible.[/expand]\n\n### 14. What is the output of the following code?\n\n`````` static void Main(string[] args)\n{\nint someInt;\nSomeMethod2(out someInt);\nConsole.WriteLine(someInt);\n\nSomeMethod1(ref someInt);\nConsole.WriteLine(someInt);\n\nSomeMethod(someInt);\nConsole.WriteLine(someInt);\n\n}\n\nstatic void SomeMethod(int value)\n{\nvalue = 0;\n}\nstatic void SomeMethod1(ref int value)\n{\nvalue = 1;\n}\nstatic void SomeMethod2(out int value)\n{\nvalue = 2;\n}\n``````\n\nNothing bad will happen. The following will be outputted:\n\n2\n1\n1\n\nSince we call SomeMethod2 with the out keyword first, someInt may be passed without initialization. If we used SomeMethod or SomeMethod1, the compilation error would occur.\n\nSince SomeMethod does not contain the ref or out keyword in the parameter, the value is passed by value, not by reference, which means that someInt will not be changed.\n\nThe ref and out keywords mean that the values are passed by reference. But in the second case, a value must be assigned to the parameter in the method. In our example, in the SomeMethod2 method, a value must be assigned to the value parameter.[/expand]\n\n### 15. Will the following code work?\n\n`````` static void Main(string[] args)\n{\ngoto lalala;\nint i = 5;\n{ Console.WriteLine(i); }\nlalala:\nConsole.WriteLine(\"Bye, cruel world! (=\");\n}``````\n\nYes, but it looks strange. Only Bye, cruel world! will be outputted. Inside the method, we can declare inclusive local region between curly braces. Variables from this region will be unavailable outside of it. That is, the following code will not Fbe compiled:\n\n``` static void Main(string[] args)\n{\n{ int i = 10; }\nConsole.WriteLine(i);\n}```\n\nStrange enough, goto is still not supported in C#. Though, it is not required much.[/expand]\n\n### 16. What will be outputted?\n\n``````string hello = \"hello\";\nstring helloWorld = \"hello world\";\nstring helloWorld2 = \"hello world\";\nstring helloWorld3 = hello + \" world\";\n\nConsole.WriteLine(helloWorld == helloWorld2);\nConsole.WriteLine(object.ReferenceEquals(helloWorld, helloWorld2));\nConsole.WriteLine(object.ReferenceEquals(helloWorld, helloWorld3));\n\nTrue, True, False will be outputted.\n\nIt is a typical example of string interning. Situations when strings storing the same value represent one object in memory. This mechanizm allows using memory more sparingly.[/expand]\n\n### 17. Can pointers be used in C# as in C++?\n\nYes, they can be used inside the method declared with the unsafe modifier or within the unsafe block.\n\n```unsafe {\nint a = 15;\n*b = &a;\nConsole.WriteLine(*b);\n}```\n\nDo not forget to specify ‘Allow unsafe code’ in the project properties.[/expand]"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7308623,"math_prob":0.9346655,"size":12324,"snap":"2023-40-2023-50","text_gpt3_token_len":3064,"char_repetition_ratio":0.15275975,"word_repetition_ratio":0.094883256,"special_character_ratio":0.27061018,"punctuation_ratio":0.17757009,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9801592,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T16:17:10Z\",\"WARC-Record-ID\":\"<urn:uuid:c06eb5f0-cee4-4e7f-bdfe-5c2f8200ce0e>\",\"Content-Length\":\"129654\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6928be7f-d01f-4af9-9a49-799d5c64f474>\",\"WARC-Concurrent-To\":\"<urn:uuid:81b038c1-6f62-40f9-8e62-2a3cd7fdf10b>\",\"WARC-IP-Address\":\"104.21.31.75\",\"WARC-Target-URI\":\"https://codingsight.com/tricky-questions-about-c-sharp/\",\"WARC-Payload-Digest\":\"sha1:RADNSO6AFSTXUKE2PDFOAUMR75BIHY5T\",\"WARC-Block-Digest\":\"sha1:MJZGJJ3ZDSPQ5NWJOWGIIB6H3FCCKFAS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510697.51_warc_CC-MAIN-20230930145921-20230930175921-00405.warc.gz\"}"} |
https://www.rizinia.com/a-new-method-for-initial-position-identification-of-interior-permanent-magnet-synchronous-machine.html | [
"China permanent magnet manufacturer: www.rizinia.com\n\n# A new method for initial position identification of Interior Permanent Magnet Synchronous Machine\n\nBuilt in permanent magnet synchronous machine (IPMSM) has the characteristics of high efficiency and high power density. It is widely used in household appliances, electric vehicles, new energy power generation, aerospace and other fields. The sensorless control of permanent magnet synchronous motor starts when the initial position of rotor is unknown, which may lead to large starting current, unable to start, and even reverse rotation. Therefore, accurate identification of rotor initial position is very important to improve the sensorless control performance of permanent magnet synchronous motor.\nAt present, the common methods for identifying the initial position of motor rotor mainly include voltage pulse injection method and high-frequency signal injection method. The voltage pulse injection method injects a series of positive and negative symmetrical voltage pulse signals into the straight axis to obtain the initial position of the rotor by using the peak value of the current response. However, the voltage pulse signal needs to be injected many times. As the voltage vector direction approaches the real position of the rotor, the peak value difference of the current response decreases and the signal-to-noise ratio of the identified position decreases.\nHigh frequency signal injection method includes high frequency current signal injection method and high frequency voltage signal injection method. The high-frequency current signal injection method is to inject the high-frequency current signal and extract the high-frequency response voltage to identify the rotor position. The performance of this method is greatly affected by the PI parameters of the current loop. High frequency voltage injection methods mainly include high frequency rotating sinusoidal voltage injection method, high frequency pulse voltage injection method and high frequency square wave voltage injection method.\nBoth high frequency rotating sinusoidal voltage injection method and high frequency pulse voltage injection method are high frequency sinusoidal signal injection. A filter needs to be used to obtain the high frequency response current signal, and then use the signal to identify the rotor position. The use of the filter reduces the dynamic performance of the system.\nTo solve this problem, a high frequency square wave voltage injection method is proposed. This method injects high-frequency square wave voltage into the direct axis and extracts the high-frequency response current on the quadrature axis to identify the rotor position. By injecting high-frequency square wave signal, the amplitude of high-frequency response current can be obtained directly by making a difference between adjacent sampling currents. Therefore, there is no need to use a filter to extract the rotor position.\nHowever, because the input signal of the observer in this method is the quadrature axis high-frequency response current amplitude, this signal is a sinusoidal function of the position error signal, and there are multiple zeros, the convergence time of the closed-loop regulation of the rotor pole position signal is long; At the same time, this method is based on the convex machine effect of motor, so it can not identify the magnetic polarity; In addition, the quadrature axis high frequency response current signal of this method is related to the motor inductance parameters. The inductance parameters need to be used to normalize the signal to ensure the universality of the observer parameter design.\n\nTo solve the above problems, researchers from the school of electrical and information engineering of Hunan University proposed a rotor initial position identification method based on high-frequency orthogonal square wave voltage injection.",
null,
"Fig.1 Schematic diagram of rotor initial position identification by high frequency orthogonal square wave voltage injection",
null,
"Figure.2 IPMSM experimental platform\n\nFirstly, the orthogonal high-frequency square wave voltage signal is injected into the static coordinate axis, and the magnetic pole position is identified by the high-frequency response current signal in the static coordinate axis system; Then, while injecting the high-frequency orthogonal square wave voltage signal, inject a low-frequency sinusoidal current signal into the direct axis, the saturation degree of the motor and the inductance of the AC-DC axis will change, resulting in the change of the high-frequency response current. The magnetic polarity (N pole and S pole) is identified by comparing the amplitude of the high-frequency response current near the positive and negative peaks of the low-frequency sinusoidal current.\nThis method can directly obtain the electrical angle of the rotor magnetic pole position by calculating the arctangent, and does not need to obtain the magnetic pole position signal through closed-loop adjustment. It has fast response speed and easy engineering implementation. In order to make the parameter design of speed observer universal, researchers take the signal obtained by arctangent as the input signal of speed observer, which avoids the dependence of speed observer parameter design on inductance parameters. Finally, the effectiveness and accuracy of the method are verified on the experimental platform of 1.5k wipmsm drive system, and the following conclusions are drawn:\n\n• 1) This method can accurately identify the initial position of the rotor. When the rotor is in different positions, the electrical angle error of the identified initial position of the rotor is less than 4 °; The duration of the whole process of rotor initial position identification can be less than 30ms.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
""
] | [
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20500%20349'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20500%20334'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2024%2024'%3E%3C/svg%3E",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8891251,"math_prob":0.94667846,"size":6199,"snap":"2021-31-2021-39","text_gpt3_token_len":1093,"char_repetition_ratio":0.19838579,"word_repetition_ratio":0.034782607,"special_character_ratio":0.16922083,"punctuation_ratio":0.0754902,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9652547,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T10:23:03Z\",\"WARC-Record-ID\":\"<urn:uuid:c678a30b-a68e-4a07-a156-00e2b3f85000>\",\"Content-Length\":\"205934\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce1750d8-95e1-48d2-b789-81efc2883801>\",\"WARC-Concurrent-To\":\"<urn:uuid:8adcb074-1930-43e3-bb1c-0034fe543e6f>\",\"WARC-IP-Address\":\"103.127.81.249\",\"WARC-Target-URI\":\"https://www.rizinia.com/a-new-method-for-initial-position-identification-of-interior-permanent-magnet-synchronous-machine.html\",\"WARC-Payload-Digest\":\"sha1:Z3SXVJ4UWAIZ4FUUJ3MMLGYK23UA5RTE\",\"WARC-Block-Digest\":\"sha1:WQFAX6ZVYXSTGMYEIIQ7UPROP74BRP4A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058415.93_warc_CC-MAIN-20210927090448-20210927120448-00299.warc.gz\"}"} |
https://www.khanacademy.org/math/multivariable-calculus/thinking-about-multivariable-function/ways-to-represent-multivariable-functions/a/multivariable-functions | [
"If you're seeing this message, it means we're having trouble loading external resources on our website.\n\nIf you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.\n\n## Multivariable calculus\n\n### Course: Multivariable calculus>Unit 1\n\nLesson 6: Visualizing multivariable functions (articles)\n\n# What are multivariable functions?\n\nAn overview of multivariable functions, with a sneak preview of what applying calculus to such functions looks like\n\n## What we're building to\n\n• A function is called multivariable if its input is made up of multiple numbers.\n$\\phantom{\\rule{1em}{0ex}}f\\left(\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\underset{\\begin{array}{c}\\text{Multiple numbers}\\\\ \\text{in the input}\\end{array}}{\\underset{⏟}{x,y}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\phantom{\\rule{-0.167em}{0ex}}\\right)={x}^{2}y$\n• If the output of a function consists of multiple numbers, it can also be called multivariable, but these ones are also commonly called vector-valued functions.\n$f\\left(x\\right)=\\left[\\begin{array}{c}\\mathrm{cos}\\left(x\\right)\\\\ \\mathrm{sin}\\left(x\\right)\\end{array}\\right]\\phantom{\\rule{1em}{0ex}}←\\text{Multiple numbers in output}$\n• Visualizing these functions is all about thinking of space with multiple dimensions (typically just two or three if we don't want our brains to explode).\n\n## What are multivariable functions?\n\nWhen I first learned about functions, and maybe this is true for you too, I remember always thinking about them as taking in a number and outputting a number. A typical example would be something like this:\n$f\\left(x\\right)={x}^{2}$\nOr this:\n$f\\left(x\\right)=\\mathrm{sin}\\left(x\\right)+2\\sqrt{x}$.\nAnd if you think back to the first time you learned about functions, you might have been taught to imagine the function as a machine which sucks in some input, somehow manipulates it, then spits out an output.\nBut really, functions don't just have to take in and spit out numbers, they can take in any thing and spit out any thing. In multivariable calculus, that thing can be a list of numbers. That is to say, the input and/or output can consists of multiple numbers.\n\nExample of different types of functions\nSingle-number inputMultiple-number inputs\nSingle-number output$f\\left(x\\right)={x}^{2}$$f\\left(x,y\\right)={x}^{2}+{y}^{3}$\nMultiple-number output$f\\left(t\\right)=\\left(\\mathrm{cos}\\left(t\\right),\\mathrm{sin}\\left(t\\right)\\right)$$f\\left(u,v\\right)=\\left({u}^{2}-v,{v}^{2}+u\\right)$\n\nA multivariable function is just a function whose input and/or output is made up of multiple numbers. In contrast, a function with single-number inputs and a single-number outputs is called a single-variable function.\nNote: Some authors and teachers use the word multivariable for functions with multiple-number inputs, not outputs.\n\n## Lists of numbers $↔$ points in space\n\nWhat makes multivariable calculus beautiful is that visualizing functions, along with all the new calculus you will learn to manipulate them, involves space with multiple dimensions.\nFor example, say the input of some function you are dealing is a pair of numbers, like $\\left(2,5\\right)$. You could think about this as two separate things: the number two and the number five.\nHowever, it's more common to represent a pair like $\\left(2,5\\right)$ as a single point in two-dimensional space, with $x$-coordinate $2$ and $y$-coordinate $5$.\nSimilarly, it's fun to think about a triplet of numbers like $\\left(3,1,2\\right)$ not as three separate things, but as a single point in three-dimensional space.\nSo multivariable functions are all about associating points in one space with points in another space. For example, a function like $f\\left(x,y\\right)={x}^{2}y$, which has a two-variable input and a single-variable output, associates points in the $xy$-plane with points on the number line. A function like $f\\left(x,y,z\\right)=\\left(yz,xz,xy\\right)$ associates points in three-dimensional space with other points in three-dimensional space.\nIn the next few articles, I'll go over various methods you can use to visualize these functions. These visualizations can be beautiful and often extremely helpful for understanding why a formula looks the way it does. However, it can also be mind-bendingly confusing at times, especially if the number of dimensions involved is greater than three.\nI think it is comforting to sit back and realize that at the end of the day, it's all just numbers. Maybe it's a pair of numbers turning into a triplet, or maybe it's one hundred numbers turning into one hundred thousand, but ultimately any task that you perform—or that a computer performs—is done one number at a time.\n\n## Vector-valued functions\n\nSometimes a list of numbers, like $\\left(2,5\\right)$, is not thought about as a point in space, but as a vector. That is to say, an arrow which involves moving $2$ to the right and $5$ up as you go from its tail to its tip.\nTo emphasize the conceptual difference, it's common to use a different notation, either writing the numbers vertically, $\\left[\\begin{array}{c}2\\\\ 5\\end{array}\\right]$, or letting the symbol $\\stackrel{^}{\\mathbf{\\text{i}}}$ represent the $x$-component while $\\stackrel{^}{\\mathbf{\\text{j}}}$ represents the $y$-component: $2\\stackrel{^}{\\mathbf{\\text{i}}}+5\\stackrel{^}{\\mathbf{\\text{j}}}$.\nThis is, of course, only a conceptual difference. A list of numbers is a list of numbers no matter whether you choose to represent it with an arrow or a point. Depending on the context, though, it can feel more natural to think about vectors. Velocity and force, for instance, are almost always represented as vectors, since this gives the strong visual of movement, or of pushing and pulling.\nFor whatever reason, when it comes to multivariable functions, it is more commonly the output that you think of as a vector, while you think about the input as a point. This is not a rule, it just happens to play out that way I guess.\n\n#### Terminology\n\nFunctions whose output is a vector are called vector-valued functions, while functions with a single number as their output are called either scalar-valued, as is common in engineering, or real-valued, as is common in pure math (real as in real number).\n\n## Examples of multivariable functions\n\nThe more you try to model the real world, the more you realize just how constraining single-variable calculus can be. Here are just a few examples of where multivariable functions arise.\n\n## Example 1: From location to temperature\n\nTo model varying temperatures in a large region, you could use a function which takes in two variables—longitude and latitude, maybe even altitude as a third—and outputs one variable, the temperature. Written down, here's how that might look:\n$T=f\\left({L}_{1},{L}_{2}\\right)$\n• $T$ is temperature.\n• ${L}_{1}$ is longitude.\n• ${L}_{2}$ is latitude.\n• $f$ is some complicated function that determines which temperature each longitude-latitude pair corresponds with.\nAlternatively, you could say that the temperature $T$ is a function of longitude ${L}_{1}$ and latitude ${L}_{2}$ and write it as $T\\left({L}_{1},{L}_{2}\\right)$.\n\n## Example 2: From time to location\n\nTo model how a particle moves through space over time, you could use a function which takes in one number—the time—and outputs the coordinates of the particle, perhaps two or three numbers depending on the dimension you are modeling.\nThere are a couple different ways this could be written down:\n$\\stackrel{\\to }{\\mathbf{\\text{s}}}=f\\left(t\\right)$\n• $\\stackrel{\\to }{\\mathbf{\\text{s}}}$ is a two or three dimensional \"displacement vector\", indicating the position of the particle.\n• $t$ is time.\n• $f$ is a vector-valued function.\nAlternatively, you might break down components of the vector-valued function into separate scalar-valued functions $x\\left(t\\right)$ and $y\\left(t\\right)$, which indicate the coordinates of x and y as functions of time:\n\n## Example 3: From user data to prediction\n\nWhen a website tries to predict a user's behavior, it might create a function that takes in thousands of variables, including the user's age, the coordinates of their location, the number of times they've clicked on links of a certain type, etc. The output might also include multiple variables, such as the probability they will click on a different link or the probability they purchase a different item.\n\n## Example 4: From position to a velocity vector\n\nIf you are modeling the flow of a fluid, one approach is to express the velocity of each individual particle in the fluid. To do this, imagine a function which takes as its input the coordinates of a particle, and which outputs the velocity vector of that particle.\nAgain, there are several ways this might look written down:\n$\\stackrel{\\to }{\\mathbf{\\text{v}}}=f\\left(x,y\\right)$\n• $\\stackrel{\\to }{\\mathbf{\\text{v}}}$ is a two-dimensional velocity vector.\n• $x$ and $y$ are position coordinates.\n• $f$ is a multivariable vector-valued function.\nAlternatively, you could break up the components of the vector-valued function $f$ and use $\\stackrel{^}{\\mathbf{\\text{i}}}$, $\\stackrel{^}{\\mathbf{\\text{j}}}$ notation:\n$\\stackrel{\\to }{\\mathbf{\\text{v}}}=g\\left(x,y\\right)\\stackrel{^}{\\mathbf{\\text{i}}}+h\\left(x,y\\right)\\stackrel{^}{\\mathbf{\\text{j}}}$\n• $\\stackrel{\\to }{\\mathbf{\\text{v}}}$ is a two-dimensional velocity vector.\n• $\\stackrel{^}{\\mathbf{\\text{i}}}$ is the unit vector in the $x$-direction.\n• $\\stackrel{^}{\\mathbf{\\text{j}}}$ is the unit vector in the $y$-direction.\n• $g$ is a scalar-valued function indicating the $x$ component of each vector as a function of position.\n• $h$ is a scalar-valued function indicating the $y$ component of each vector as a function of position.\n\n## Where calculus fits in\n\nThere are two fundamental topics in calculus:\n• Derivatives, which study the rate of change of a function as you tweak its input.\n• Integrals, which study how to add together infinitely many infinitesimal quantities that make up a function's output.\nMultivariable calculus extends these ideas to functions with higher-dimensional inputs and/or outputs.\nWith respect to the examples above, rates of change could refer to the following:\n• How temperature changes as you move in a some direction.\n• The amount an online shopper's behavior changes as some aspect of the site changes.\n• The fluctuations in flow rate across space.\nOn the other hand, \"add together infinitely many infinitesimal quantities\" might mean\n• Finding the average temperature.\n• Computing the total work done on a particle by some external force while it moves.\n• Describing the net velocity of an entire region of some flowing liquid.\nWhat makes these cases fundamentally different from single variable calculus is that we will need to describe changes in different directions, as well as how those changes relate to each other. You'll see what I mean in coming topics.\nConcept Check: In Example 2 above, where the location of a particle is described as a function of time, what would be an example of a rate of change we might be interested in?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9336185,"math_prob":0.99661255,"size":13051,"snap":"2023-40-2023-50","text_gpt3_token_len":2787,"char_repetition_ratio":0.14785008,"word_repetition_ratio":0.026363635,"special_character_ratio":0.20841315,"punctuation_ratio":0.105325915,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9988185,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T01:19:52Z\",\"WARC-Record-ID\":\"<urn:uuid:b46e73ce-5971-401e-bb6e-e8d05bedec83>\",\"Content-Length\":\"798839\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e2210b13-2e68-4b09-b428-7845355b1350>\",\"WARC-Concurrent-To\":\"<urn:uuid:bdba4596-7237-48f5-a392-a463577383cb>\",\"WARC-IP-Address\":\"146.75.29.42\",\"WARC-Target-URI\":\"https://www.khanacademy.org/math/multivariable-calculus/thinking-about-multivariable-function/ways-to-represent-multivariable-functions/a/multivariable-functions\",\"WARC-Payload-Digest\":\"sha1:KS5O76FU3Z3VE7NZZOC4DJC543AEIW6C\",\"WARC-Block-Digest\":\"sha1:7G7CIBSEZGJNKAIGBXRG4ZJTW3CWRLMT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100258.29_warc_CC-MAIN-20231130225634-20231201015634-00213.warc.gz\"}"} |
https://de.mathworks.com/help/matlab/math/multiple-streams.html | [
"# Multiple Streams\n\n### Using Multiple Independent Streams\n\nMATLAB® software includes generator algorithms that enable you to create multiple independent random number streams. For example, the four generator types that support multiple independent streams are the Combined Multiple Recursive (`'mrg32k3a'`), the Multiplicative Lagged Fibonacci (`'mlfg6331_64'`), the Philox 4x32 ('`philox4x32_10`'), and the Threefry 4x64 ('`threefry4x64_20`') generators. You can create multiple independent streams that are guaranteed to not overlap, and for which tests that demonstrate (pseudo)independence of the values between streams have been carried out. For more information about generator algorithms that support multiple streams, see the table of generator algorithms in Choosing a Random Number Generator.\n\nThe `RandStream.create` function enables you to create streams that have the same generator algorithm and seed value, but are statistically independent.\n\n`[s1,s2,s3] = RandStream.create('mlfg6331_64','NumStreams',3)`\n```s1 = mlfg6331_64 random stream StreamIndex: 1 NumStreams: 3 Seed: 0 NormalTransform: Ziggurat ```\n```s2 = mlfg6331_64 random stream StreamIndex: 2 NumStreams: 3 Seed: 0 NormalTransform: Ziggurat ```\n```s3 = mlfg6331_64 random stream StreamIndex: 3 NumStreams: 3 Seed: 0 NormalTransform: Ziggurat ```\n\nAs evidence of independence, you can see that these streams are largely uncorrelated.\n\n```r1 = rand(s1,100000,1); r2 = rand(s2,100000,1); r3 = rand(s3,100000,1); corrcoef([r1,r2,r3])```\n```ans = 3×3 1.0000 0.0007 0.0052 0.0007 1.0000 0.0000 0.0052 0.0000 1.0000 ```\n\nDepending on the application, creating only some of the streams in a set of independent streams can be useful if you need to simulate some events. Specify the `StreamIndices` parameter to create only some of the streams from a set of multiple streams. The `StreamIndex` property returns the index of each stream you create.\n\n```numLabs = 256; labIndex = 4; s4 = RandStream.create('mlfg6331_64','NumStreams',numLabs,'StreamIndices',labIndex)```\n```s4 = mlfg6331_64 random stream StreamIndex: 4 NumStreams: 256 Seed: 0 NormalTransform: Ziggurat ```\n\nMultiple streams, since they are statistically independent, can be used to verify the precision of a simulation. For example, a set of independent streams can be used to repeat a Monte Carlo simulation several times in different MATLAB sessions or on different processors and determine the variance in the results. This makes multiple streams useful in large-scale parallel simulations.\n\n### Using Seeds to Get Different Results\n\nFor generator types that do not explicitly support independent streams, different seeds provide a method to create multiple streams. By using different seeds, you can create streams that return different values and act separately from one another. However, using a generator specifically designed for multiple independent streams is a better option, as the statistical properties across streams have been carefully verified.\n\nCreate two streams with different seeds by using the Mersenne twister generator.\n\n`s1 = RandStream('mt19937ar','Seed',1)`\n```s1 = mt19937ar random stream Seed: 1 NormalTransform: Ziggurat ```\n`s2 = RandStream('mt19937ar','Seed',2)`\n```s2 = mt19937ar random stream Seed: 2 NormalTransform: Ziggurat ```\n\nUse the first stream in one MATLAB session to generate random numbers.\n\n`r1 = rand(s1,100000,1);`\n\nUse the second stream in another MATLAB session to generate random numbers.\n\n`r2 = rand(s2,100000,1);`\n\nWith different seeds, streams typically return values that are uncorrelated.\n\n`corrcoef([r1,r2])`\n```ans = 2×2 1.0000 0.0030 0.0030 1.0000 ```\n\nThe two streams with different seeds may appear uncorrelated since the state space of the Mersenne Twister is so much larger (${2}^{19937}$ elements) than the number of possible seeds (${2}^{32}$). The chances of overlap in different simulation runs are pretty remote unless you use a large number of different seeds. Using widely spaced seeds does not increase the level of randomness. In fact, taking this strategy to the extreme and reseeding a generator before each call can result in the sequence of values that are not statistically independent and identically distributed.\n\nSeeding a stream is most useful if you use it as an initialization step, perhaps at MATLAB startup, or before running a simulation.\n\n### Using Substreams to Get Different Results\n\nAnother method to get different results from a stream is to use substreams. Unlike seeds, where the locations along the sequence of random numbers are not exactly known, the spacing between substreams is known, so any chance of overlap can be eliminated. Like independent parallel streams, research has been done to demonstrate statistical independence across substreams. In short, substreams are a more controlled way to do many of the same things that seeds have traditionally been used for, and a more lightweight solution than parallel streams.\n\nSubstreams provide a quick and easy way to ensure that you get different results from the same code at different times. For example, generate several random numbers in a loop.\n\n```defaultStream = RandStream('mlfg6331_64'); RandStream.setGlobalStream(defaultStream) for i = 1:5 defaultStream.Substream = i; z = rand(1,i) end```\n```z = 0.6986 ```\n```z = 1×2 0.9230 0.2489 ```\n```z = 1×3 0.0261 0.2530 0.0737 ```\n```z = 1×4 0.3220 0.7405 0.1983 0.1052 ```\n```z = 1×5 0.2067 0.2417 0.9777 0.5970 0.4187 ```\n\nIn another loop, you can generate random values that are independent from the first set of 5 iterations.\n\n```for i = 6:10 defaultStream.Substream = i; z = rand(1,11-i) end```\n```z = 1×5 0.2650 0.8229 0.2479 0.0247 0.4581 ```\n```z = 1×4 0.3963 0.7445 0.7734 0.9113 ```\n```z = 1×3 0.2758 0.3662 0.7979 ```\n```z = 1×2 0.6814 0.5150 ```\n```z = 0.5247 ```\n\nEach of these substreams can reproduce its loop iteration. For example, you can return to the 6th substream in the loop.\n\n```defaultStream.Substream = 6; z = rand(1,5)```\n```z = 1×5 0.2650 0.8229 0.2479 0.0247 0.4581 ```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83060896,"math_prob":0.98254824,"size":5652,"snap":"2020-34-2020-40","text_gpt3_token_len":1512,"char_repetition_ratio":0.15350567,"word_repetition_ratio":0.042755343,"special_character_ratio":0.28556263,"punctuation_ratio":0.16240875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9769885,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T03:06:30Z\",\"WARC-Record-ID\":\"<urn:uuid:11e26c91-d51f-42c3-81ed-722831323c5d>\",\"Content-Length\":\"77031\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75501794-6a05-40b9-964f-feb05d8e7e34>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa52ab26-cc6d-48aa-a8a8-8533b5073333>\",\"WARC-IP-Address\":\"23.32.68.178\",\"WARC-Target-URI\":\"https://de.mathworks.com/help/matlab/math/multiple-streams.html\",\"WARC-Payload-Digest\":\"sha1:5M5K7SYQB7TFPXQSX3FD6YAOZSGWFLGD\",\"WARC-Block-Digest\":\"sha1:XG2ACB75VJF77PPGS6RROFUD2ESNTIDY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400202686.56_warc_CC-MAIN-20200922000730-20200922030730-00783.warc.gz\"}"} |
https://web2.0calc.com/questions/algebra_37028 | [
"+0\n\n# Algebra\n\n0\n114\n1\n\nThe non-negative real numbers x ,y and z satisfy the two equations\n\nx^2 + y^2 + z^2 = 9\n\nxy + yz + zx = 6\n\nWhat is the sum of x, y, and z?\n\nApr 29, 2021"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72672695,"math_prob":1.0000012,"size":158,"snap":"2021-43-2021-49","text_gpt3_token_len":65,"char_repetition_ratio":0.051948052,"word_repetition_ratio":0.0,"special_character_ratio":0.43670887,"punctuation_ratio":0.11363637,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994709,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T02:18:17Z\",\"WARC-Record-ID\":\"<urn:uuid:931e5a48-032e-4adb-9eb5-b866183cba81>\",\"Content-Length\":\"19968\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e4677fc2-773b-4863-8e0e-21ddbfc0339d>\",\"WARC-Concurrent-To\":\"<urn:uuid:3201095b-adb8-4128-a359-47b2b5f20504>\",\"WARC-IP-Address\":\"168.119.68.208\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/algebra_37028\",\"WARC-Payload-Digest\":\"sha1:MBGXEGO4ZJI4FEWSNBHSP3P2Q6ZJUQNF\",\"WARC-Block-Digest\":\"sha1:YW5TRTDOND7MBVUPWB72OED7CJ6BCUWM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363332.1_warc_CC-MAIN-20211207014802-20211207044802-00450.warc.gz\"}"} |
http://jas.shahroodut.ac.ir/article_2054.html | [
"Document Type : Original Manuscript\n\nAuthors\n\nFaculty of Mathematical Sciences, Shahrood University of Technology, P.O. Box: 316-3619995161, Shahrood, Iran.\n\nAbstract\n\nA subset of the vertex set of a graph $G$ is called a zero forcing set if by considering them colored and, as far as possible, a colored vertex with exactly one non-colored neighbor forces its non-colored neighbor to get colored, then the whole vertices of $G$ become colored. The total forcing number of a graph $G$, denoted by $F_t(G)$, is the cardinality of a smallest zero forcing set of $G$ which induces a subgraph with no isolated vertex. The connected forcing number, denoted by $F_c(G)$, is the cardinality of a smallest zero forcing set of $G$ which induces a connected subgraph. In this paper, we first characterize the graphs with $F_t(G)=2$ and, as a corollary, we characterize the graphs with $F_c(G)=2$.\n\nKeywords",
null,
"20.1001.1.23455128.2021.9.1.6.2\nReferences\n1. A. Aazami, Hardness results and approximation algorithms for some problems on graphs, PhD thesis, University of Waterloo, 2008.\n\n2. F. Barioli, W. Barrett, S. M. Fallat, H. T. Hall, L. Hogben, B. Shader, P. van den Driessche, and H. Van Der Holst, Parameters related to tree-width, zero forcing, and maximum nullity of a graph, J. Graph Theory, 72(2) (2013), 146–177.\n\n3. B. Brimkov and R. Davila, Characterizations of the connected forcing number of a graph, arXiv preprint arXiv:1604.00740, (2016).\n\n4. B. Brimkov, C. C. Fast, and I.V. Hicks, Graphs with extremal connected forcing numbers, arXiv preprint arXiv:1701.08500, (2017).\n\n5. B. Brimkov and I.V Hicks, Complexity and computation of connected zero forcing, Discrete Appl. Math., 229 (2017), 31–45.\n\n6. D. Burgarth and V. Giovannetti, Full control by locally induced relaxation, Phys. Rev. Lett., 99(10) (2017), 100501.\n\n7. D. Burgarth, V. Giovannetti, L. Hogben, S. Severini, and M. Young, Logic circuits from zero forcing, Nat. Comput., 14(3) (2015), 485–490.\n\n8. R. Davila and M. Henning, Total forcing sets and zero forcing sets in trees, Discuss. Math. Graph Theory, 40(3) (2020), 733–754.\n\n9. R. Davila and M. A. Henning, On the total forcing number of a graph, Discrete Appl. Math., 257 (2019),115–127.\n\n10. R. Davila, M. A. Henning, C. Magnant, and R. Pepper, Bounds on the connected forcing number of a graph, Graphs Combin., 34(6) (2018), 1159–1174.\n\n11. R. Davila, Bounding the forcing number of a graph, PhD thesis, Rice University, (2015).\n\n12. T. W. Haynes, S.M Hedetniemi, S.T Hedetniemi, and M.A. Henning, Domination in graphs applied to electric power networks, SIAM J. Discrete Math., 15(4) (2002), 519–529.\n\n13. D. D. Row, A technique for computing the zero forcing number of a graph with a cut-vertex, Linear Algebra Appl., 436(12) (2012), 4423–4432.\n\n14. M. M. Syslo, Characterizations of outerplanar graphs, Discrete Math., 26(1)\n(1979), 47–53.\n\n15. Work, A. M. R. S. G., Zero forcing sets and the minimum rank of graphs, Linear Algebra Appl., 428(7) (2008), 1628–1648."
] | [
null,
"http://jas.shahroodut.ac.ir/images/dor.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7945036,"math_prob":0.9463275,"size":2801,"snap":"2022-05-2022-21","text_gpt3_token_len":875,"char_repetition_ratio":0.13657491,"word_repetition_ratio":0.07191011,"special_character_ratio":0.3413067,"punctuation_ratio":0.29130435,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9956504,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T13:47:51Z\",\"WARC-Record-ID\":\"<urn:uuid:5602fc8d-d013-4c89-9f06-c18f41e97a19>\",\"Content-Length\":\"57011\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0c9bd700-ac75-4985-b8bb-d0bbfb2d6c10>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd6a5040-f75f-4825-a63a-91ce779afa26>\",\"WARC-IP-Address\":\"85.185.67.213\",\"WARC-Target-URI\":\"http://jas.shahroodut.ac.ir/article_2054.html\",\"WARC-Payload-Digest\":\"sha1:RUXISXFQVVXC2KMEKDKEHGDHZNIJVK73\",\"WARC-Block-Digest\":\"sha1:IZT3VBPW5DTG3OG7W6EASBSMEG6VRAGF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305266.34_warc_CC-MAIN-20220127133107-20220127163107-00715.warc.gz\"}"} |
https://math.stackexchange.com/questions/97132/hausdorff-completion-of-a-uniform-space-with-pseudometrics | [
"# Hausdorff completion of a uniform space with pseudometrics.\n\nI'm having trouble constructing the Hausdorff completion of a uniform space $(X,U)$ using pseudometrics. I know that every uniformity on a space $X$ is made by pseudometrics. Here is my idea:\n\nLet $(X,U)$ be a uniformity generated by pseudometrics $(d_i: X \\rightarrow\\mathbb{R}_{\\geq 0})_{i \\in I}$. Define 'Generalized' Cauchy Sequences (GCS) as followed: $(x_n)_{n \\in \\mathbb{N}}$ is a GCS iff $$\\forall \\varepsilon > 0 \\forall K \\subset I \\text{ finite }\\forall i \\in K \\exists n_0 \\forall p,q \\geq n_0 : d_i(x_p,x_q)<\\varepsilon$$ Let $\\mathcal{C}$ be the set of all GCS and define a equivalence relation as followed: $$(x_n)_{n \\in \\mathbb{N}}R(y_n)_{n \\in \\mathbb{N}} \\Leftrightarrow \\forall \\varepsilon > 0 \\forall K \\subset I \\text{ finite }\\forall i \\in K \\exists n_0 \\forall n \\geq n_0 : d_i(x_n,y_q)<\\varepsilon$$\n\nLet $X'$ be the set $\\mathcal{C}/R$. Define pseudometrics $d'_i((x_n)_{n \\in \\mathbb{N}},(y_n)_{n \\in \\mathbb{N}})$ as $\\lim_{n \\rightarrow \\infty} d_i(x_n,y_n)$. Let $k: X \\rightarrow X'$ be the canonical function.\n\nNow, I can prove that $X'$ is Hausdorff (thanks to the equivalence relation), $k(X)$ is dense is $X'$ (every point in $X'$ is the limit of images of points in $X$), but I'm having trouble proving that $X'$ is complete.\n\nBasically, I have 2 questions: is what I'm doing correct (and if so, how do I proceed) or else, what am I doing wrong here (and how to correct it).\n\nAs always, any help would be appreciated.\n\nNotice first that you’ve not gained anything by looking at finite $K\\subseteq I$: your definition of GCS is equivalent to saying that the sequence is $d_i$-Cauchy for every $i\\in I$. The real problem, though, is that in general sequences don’t suffice in uniform spaces: you have to look at nets or filters. Thus, you can’t expect that adding points for sequences that ought to converge will be enough. Let me suggest a different approach (which is the standard one).\nFor $i\\in I$ let $\\langle X_i,d_i\\rangle$ be the pseudometric space with underlying set $X$ and pseudometric $d_i$. For $x,y\\in X$ define $x\\stackrel{i}\\sim y$ iff $d_i(x,y)=0$, let $Y_i$ be the quotient $X_i/\\stackrel{i}\\sim$, let $q_i$ be the quotient map, and let $\\rho_i$ be the metric on $Y_i$ induced by $d_i$: $$\\rho_i\\big(q_i(x),q_i(y)\\big)=d_i(x,y)$$ for any $q_i(x),q_i(y)\\in Y_i$. Let $Y=\\prod_{i\\in I}Y_i$. Then the map\n$$e:X\\to Y:x\\mapsto\\langle q_i(x):i\\in I\\rangle$$\nNow let $\\hat Y_i$ be the metric completion of $Y_i$ for each $i\\in I$. Show that $\\prod_{i\\in I}\\hat Y_i$ is a complete uniform space, and conclude that the closure of $e[X]$ in $\\prod_{i\\in I}\\hat Y_i$ is a Hausdorff completion of $X$."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.74438685,"math_prob":0.99985886,"size":1451,"snap":"2022-27-2022-33","text_gpt3_token_len":481,"char_repetition_ratio":0.12093987,"word_repetition_ratio":0.10666667,"special_character_ratio":0.32322535,"punctuation_ratio":0.10472973,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000079,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-14T17:21:46Z\",\"WARC-Record-ID\":\"<urn:uuid:e577c047-93bd-4f4e-88ed-0dbfb7b06c7d>\",\"Content-Length\":\"220963\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b209e6d9-a57a-4dbd-919c-03d4b3ccc799>\",\"WARC-Concurrent-To\":\"<urn:uuid:9238ccec-0aa4-42d2-b745-80ab0a7ad643>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/97132/hausdorff-completion-of-a-uniform-space-with-pseudometrics\",\"WARC-Payload-Digest\":\"sha1:Y5RRISUGV23PKJ6OXOXZCT762P347U7B\",\"WARC-Block-Digest\":\"sha1:XIF5ZC57TG25EHA6AI6EWY2E4G6KL44R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572043.2_warc_CC-MAIN-20220814143522-20220814173522-00210.warc.gz\"}"} |
https://physicsoverflow.org/5984/strings-and-their-masses | [
"#",
null,
"Strings and their masses\n\n+ 3 like - 0 dislike\n1034 views\n\nHow do strings present in particles give mass to them? Is it only by vibrating? I have been trying to find the answer but could not find it anywhere, can this question be answered?\n\nThis post imported from StackExchange Physics at 2014-03-07 16:37 (UCT), posted by SE-user APARAJITA\nretagged Apr 19, 2014\n\n+ 3 like - 0 dislike\n\nWhile it is true that an excited string (hence one with a vibration mode above the ground state) looks like a massive particle from far away, this is not the effect that is supposed to explain the mass of any particle ever seen. This is because the mass of the first excited mode of the string is already huge as far as particle masses go. So in string phenomenology, instead, all particles are modeled by strings in their massless ground state excitation and the actual observed masses are induced, as it should be, by a Higgs effect.\n\nWhile the excited string states are not supposed to show up at energy scales anywhere close to what is being observed, their presence is still crucial: it is all these heavy particle excitations whose appearance as \"virtual particles\" in scattering amplitudes serve to make string scattering amplitudes be loop-wise finite, hence renormalized.\n\nSee the nLab String Theory FAQ the entry How do strings model massive particles?.\n\nThis post imported from StackExchange Physics at 2014-03-07 16:37 (UCT), posted by SE-user Urs Schreiber\nanswered Aug 24, 2013 by (6,095 points)\n+ 2 like - 0 dislike\n\nI presume that you are asking about the mass spectrum of string theories.\n\nThe mass spectrum of a Classical string theory, or the mass of a string is (due to Special Relativity) given, by:\n\n$$m=\\sqrt{-p^\\mu p_\\mu}=\\sqrt N$$\n\nIn natural units $c_0=\\ell_s=\\hbar=1$. Where $N$ is an operator, called the \"Number Operator\". In Classical string theories, this is continuous. When we quantise the theory, we realise that the new mass spectrum is actually given by:\n\n$$m=\\sqrt{N-a}$$\n\nWhere $a$ is called the normal ordering constant. Now, $N$ is going to take discrete values, multiples of $\\frac12$.\n\nIn Bosonic String Theory, $a=1$. In superstring theories, $a$ depends on the sector you' are talking about; it is $0$ in the Neveu-Schwarz sector, and $\\frac12$ in the Ramond sector.\n\nOf course, In GSO Projected theories (i.e. the tachyon is removed (yes, even in the RNS (Ramond-Neveu-Schwarz) Superstring, there are tachyons if you don't GSO Project; although this problem is absent in the GS (Green-Schwarz) Superstring)) , a GSO Projection gets rid of certain states and so on, but let's keep things simple right now.\n\nNow, I've only been talking about open strings. What about the closed strings, which are more important, because the open strings are present only in the Type I Superstring theory (and Bosonic, of course (and probably also Type 0A and 0B (not sure))), whereas the closed strings are there in all string theories?\n\nThe transition happens to be relatively simple.\n\nYou replace $N$ with / $N+\\tilde N$ and $a$ with $a+\\tilde a$.\n\nEDIT\n\nI also see that in your post, you say \"strings in particles\". Actually, the particles themselves are strings. And they get their mass as per the vibrational modes $\\alpha,\\tilde\\alpha,d,\\tilde{d}$of the string with the Number operator $N$ given by\n\n$$N = \\sum\\limits_{n = 1}^\\infty {{{\\hat \\alpha }_{ - n}}\\cdot{{\\hat \\alpha }_n}} + \\sum\\limits_{r/2 = 1}^\\infty {{{\\hat d}_{ - r}}\\cdot{{\\hat d}_r}}$$.\n\nanswered Aug 23, 2013 by (1,985 points)\nWhat I actually meant was that, particles gain mass by interacting with Higgs, it also attains mass due to the kinetic energy of gluons and Also due to the vibration of string in them, do all this factor give mass to a single particle?\n\nThis post imported from StackExchange Physics at 2014-03-07 16:37 (UCT), posted by SE-user APARAJITA\n@APARAJITA: Actually, the vibration of the string (overall given by $N$) is what gives the mass. As for the higgs, it's just that the interaction kind of \"gives it this same mass\". But note, the Higgs has geometric interpretations in string theory. As for the gluons, it's actually the potential energy, and it's the energy of aan entire composite particle, like a proton, not an individual elementary particle, like the quark, whose mass can be determined by the manipulations in the answer.\n\nThis post imported from StackExchange Physics at 2014-03-07 16:37 (UCT), posted by SE-user Dimensio1n0\n@APARAJITA: See this post by Lubos Motll; the last section about \"String theory's diverse geometric ways to look at Higgs bosons \" , . There are actually 3 different such interpretations.\n\nThis post imported from StackExchange Physics at 2014-03-07 16:37 (UCT), posted by SE-user Dimensio1n0\n@APARAJITA the issue about the geometric interpretation of the higgs in ST could be a good new question ;-)\n\nThis post imported from StackExchange Physics at 2014-03-07 16:37 (UCT), posted by SE-user Dilaton\nBut according to dimension10 it has already been answered by lubosmoti, though I have not checked it yet\n\nThis post imported from StackExchange Physics at 2014-03-07 16:37 (UCT), posted by SE-user APARAJITA\n Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the \"link\" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor) Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\\varnothing$ in the following word:p$\\hbar$ysicsOver$\\varnothing$lowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register."
] | [
null,
"https://physicsoverflow.org/qa-plugin/po-printer-friendly/print_on.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86295044,"math_prob":0.86496764,"size":1944,"snap":"2023-40-2023-50","text_gpt3_token_len":546,"char_repetition_ratio":0.11907216,"word_repetition_ratio":0.0,"special_character_ratio":0.27623457,"punctuation_ratio":0.12144703,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9830366,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T08:45:19Z\",\"WARC-Record-ID\":\"<urn:uuid:0a73488b-17da-4a98-8623-969c88261d72>\",\"Content-Length\":\"159578\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:45ecdc38-9c11-4282-ab93-30fd1dd5d908>\",\"WARC-Concurrent-To\":\"<urn:uuid:3f74bffa-014a-4bc5-8adc-ce6a28779405>\",\"WARC-IP-Address\":\"129.70.43.86\",\"WARC-Target-URI\":\"https://physicsoverflow.org/5984/strings-and-their-masses\",\"WARC-Payload-Digest\":\"sha1:NDPBYKC3SWGQV22YTQ5RR5JG3AQCDCV2\",\"WARC-Block-Digest\":\"sha1:PAYI7NYSBITOYR26LV6AGBAPPWCMR5QQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506339.10_warc_CC-MAIN-20230922070214-20230922100214-00289.warc.gz\"}"} |
https://www.physicsforums.com/threads/weierstrass-approach-to-analysis.82257/ | [
"# Weierstrass approach to analysis\n\nGold Member\nhow does weierstrass' approach to analysis differ from the classical approach, i.e leibnizs' and newton's calculus?\n\nHomework Helper\nGold Member\nDearly Missed\nSince Weierstrass was the one systematizing the \"epsilon/delta\"-approach to calculus, you might say that Weierstrass was the first to provide a truly mature and rigorous approach to calculus.\n\nHomework Helper\nGold Member\nAnd how does Cauchy fit in that picture? I thought he was the one who invented the epsilon-delta formulation\n\nHomework Helper\nGold Member\nDearly Missed\nquasar987 said:\nAnd how does Cauchy fit in that picture? I thought he was the one who invented the epsilon-delta formulation\nI said that Weierstrass SYSTEMATIZED the use of this technique; I didn't say he was the the first to have these ideas.\n\nStaff Emeritus\nGold Member\nDearly Missed\nWeierstrasse and his students found Cauchy's proofs imperfect and redid them. That's why we have, e.g. the Cauchy-Kawalevsky theorem.\n\nHomework Helper\nGold Member\nCool.\n\n--------------\n\nHomework Helper\nGold Member\nI thought Cauchy was the only mastermind behind epsilon-delta calculus. Did Weierstrass and friends also discovered new basics theorems that Cauchy had missed?\n\nHomework Helper\nGold Member\nDearly Missed\nI think that some of the major problems in Cauchy's original proofs were due to the lack of precise convergence concepts, like pointwise convergence vs. uniform convergence.\n\nHomework Helper\nthe first reasonably clear definition of continuity may be due to Bolzano, who wrote in 1817 the following definition (in German): f is continuous at x, if and only if \"...the difference f(x+h)-f(x) can be made smaller than any given quantity, if h is taken sufficiently small.\"\n\nIn the following proof of the intermediate value theorem he then uses an epsilon for the \"given quantity\".\n\nCauchy, writing 6 years later, in 1823, gives the following somewhat less clear definition: \" ..the magnitude of the difference f(x+h)-f(x) decreases indeifinitely with that of h.\"\n\nHe clarifies this somewhat as follows: I.e. \"an infinitesimal increment in the variable produces an infinitesimal increment in the function itself.\" where he has explained that an infinitesimal is a quantity whose \"successive absolute values decrease indefinitely so as to become less than any given quantity.\"\n\nthe modern definition which needs only to write the letter epsilon for bolzano's \"given quantity\", as he himself does in his proofs, was published first by heine, a student of weierstrass, in 1874, following lectures of weierstrass.\n\nit is difficult for a modern person to see the real difference in some of these definitions, once one knows what they mean. Indeed I have seen quotations form newton which sounded essentially like the modern definition of limit. at least one is easily persuaded that the master understood the true meaning very well.\n\nindeed if one spells out the meaning of cauchy's definition as he himself gave it, it is the same. so it seems to me that these workers did themselves understand the meaning of continuity as we do today.\n\nthere seems little doubt however that, as has been stated, cauchy did not appreciate the distinction between continuity and uniform continutiy, nor that between convergence and uniform convergence, and made errors or at least omissions of that nature.\n\nthe translations i have used here are from \"A Source book of classical analysis\", Harvard university Press, edited by Garrett Birkhoff. :tongue:\n\nas to why weierstrass and not bolzano gets credit for epsilon - delta continuity, it seems there is a distinction between originating a concept and influencing others to do so. i.e. bolzano may have led the way himself, but did so in papers devoted almost exclusively to such foundations, so few followed, while others like cauchy were more interested in applying these notions to questions about integrals and series.\n\nthus people interested in cauchy's theorems gave him credit for introducing the new methods he was using, even though those were in fact more primitive than the earlier ones of bolzano. finally it seems weierstrass and his students used almost the same formulation as bolzano but applied it to current topics of interest.\n\nit is odd for example that a theorem could be known as the bolzano - weierstrass theorem, when the two men worked some 50 years apart. possibly it was discovered first by bolzano but rediscovered and popularized by weierstrass. (of course there is also a stone - weierstrass theorem and a riemann - kempf theorem, and a gauss - bonnet - chern theorem,.....,in which there is 80-120 years separating the two workers, but in those cases the new results mentioned are significant generalizations of the older ones. In fact a friend of mine once asked Stone just when he had worked with Weierstrass.)\n\non another thread there is a principle mentioned called there the \"arnol'd principle\", that if a certain concept or theorem carries a person's name, then it is almost certain that person did not originate that principle. it is then mentioned that indeed arnol'd is not responsible for this principle.\n\nLast edited:\nHomework Helper\nby the way bolzano clearly states and fairly well proves the so called \"cauchy criterion\" for convergence in 1817. see p.15 of the source book cited above.\n\nGold Member\narildno said:\nSince Weierstrass was the one systematizing the \"epsilon/delta\"-approach to calculus, you might say that Weierstrass was the first to provide a truly mature and rigorous approach to calculus.\nfrom what i read his approach had left behind the infinitisemals for the the epsilon-delta formulation, but i also read that abraham robinson had resurrected the infinitisimal formulation on a better rigorous formulation, in what now is called Non-Standard Analysis (NSA), how do these approaches differ from eachother, and what makes NSA non-standard?\n\nHomework Helper\nGold Member\nDearly Missed\nloop quantum gravity said:\nfrom what i read his approach had left behind the infinitisemals for the the epsilon-delta formulation, but i also read that abraham robinson had resurrected the infinitisimal formulation on a better rigorous formulation, in what now is called Non-Standard Analysis (NSA), how do these approaches differ from eachother, and what makes NSA non-standard?\nI hope math-wizzes like Hurkyl, M.G, or mathwonk can give you a bit of solid info on robinson's approach, but here's a few schematic details on the history of analysis that I don't think is too misleading:\n\n1. As mathwonk has said, the limit concept has been around a long time; formulations by Newton and Bolzano is at times essentially indistinguishable from modern versions. (I would also like to include Archimedes here; his ideas isn't really that far from modern ideas, and the proofs he gives are at times, I believe, up to modern standards of rigour).\n\n2. However, it is by Cauchy we find the origin of the epsilon/delta-formulation (by which ideas of infinitesemals became superfluous), but his texts are at times not careful enough to distinguish between various forms of continuity/convergence (see mathwonk's reply)\n\n3. Weierstrass and his students recognized the difficulties in Cauchy's original work, and took upon themselves to give his maths a major overhaul to make it fully rigorous\n\n4. Robinson realized that it was possible to revive the concepts of infinitesemals, but that in order to do so properly and rigorously, he had to \"leave\" the number system called \"reals\", and invent a number system sufficiently subtle to include \"infinitesemals\" (I think this goes under the name of surreals, but I'm not too sure on that issue).\nSo, his first task was to develop a good axiomatic structure governing his new number system, and then \"translate\" the concepts from standard analysis (which lives in the real (or complex) number system)) into his own number system.\n\nGold Member\nsurreal numbers are john conway's innovation, i think robinson's is the hyperreal numbers.\nanyway, ive read also about someone's interpratation that replaces the number system to decimals (from the reals), although it puzzled me bacause decimals are just different representatives of the reals, isnt it?!\nill probably need to find it once more to undersatnad what it's really about.\n\nStaff Emeritus\nGold Member\nloop quantum gravity said:\nfrom what i read his approach had left behind the infinitisemals for the the epsilon-delta formulation, but i also read that abraham robinson had resurrected the infinitisimal formulation on a better rigorous formulation, in what now is called Non-Standard Analysis (NSA), how do these approaches differ from eachother, and what makes NSA non-standard?\n\nReal numbers can be constructed from natural numbers. Hyperreal numbers, extensions of the real numbers on which non-standard analysis is based, are constructed from non-standard models of the natural numbers.\n\nAn elementary calculus book,\n\n\"Elementary Calculus: An Approach Using Infinitesimals\nOn-line Edition, by H. Jerome Keisler\n\nThis is a calculus textbook at the college Freshman level based on Abraham Robinson's infinitesimals, which date from 1960. Robinson's modern infinitesimal approach puts the intuitive ideas of the founders of the calculus on a mathematically sound footing, and is easier for beginners to understand than the more common approach via limits.\n\nThe First Edition of this book was published in 1976, and a revised Second Edition was published in 1986, both by Prindle, Weber & Schmidt. The book is now out of print and the copyright has been returned to me as the author. I have decided (as of September 2002) to make the book available for free in electronic form at this site. These PDF files were made from the printed Second Edition.\",\n\nbased on non-standard analysis is available here. Hyperreal numbers are first discussed in a section that begins on page 21. The Epilogue relates non-standard analysis to the standard $\\epsilon - \\delta$ view of limits.\n\nWhat little I used to know about the relationship between hyperreals and surreals, I, unfortunately, have forgotten.\n\nRegards,\nGeorge\n\nStaff Emeritus\nGold Member\nThe surreals are, essentially, the \"master\" ordered field: every small ordered field can be embedded in the surreal numbers.\n\n(small simply means that the elements of the field will fit into a set)\n\nBut since the surreals are so large (they do not fit into a set), they're fairly unwieldy.\n\nThe reason nonstandard analysis is called \"nonstandard\" comes from model theory. I could, for example, create a theory by writing down a list of all the axioms of the real numbers in first-order logic. Then, the set of real numbers can be used to form a model of this theory.\n\nHowever, the real numbers are not the only model of these axioms! There are other models, like the real algebraic numbers, or the hyperreal numbers. These other models are called non-standard models. Internally, these other models are indistinguishable from the reals; you can only tell them apart by appealing to the set theory in which these models reside. (i.e. looking at it externally)\n\nNon-standard analysis is based upon studying the relationship between a standard model of analysis (built upon the reals) and a non-standard model of analysis (built upon the hyperreals), thus the name.\n\nGold Member\nhere's the paper i mentioned in my previous post, the author of the ideas listed there is prof alexander abian:\nhttp://www.fidn.org/notes1.pdf [Broken]\n\nLast edited by a moderator:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9521577,"math_prob":0.79737467,"size":1932,"snap":"2022-27-2022-33","text_gpt3_token_len":447,"char_repetition_ratio":0.13589212,"word_repetition_ratio":0.510274,"special_character_ratio":0.19254659,"punctuation_ratio":0.094827585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9644689,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T22:18:20Z\",\"WARC-Record-ID\":\"<urn:uuid:1f939a3a-8b3e-4d47-9d77-0731b9f9035f>\",\"Content-Length\":\"113319\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68e2f9be-e234-4b33-ae43-2cfdad2ede1d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5143a23f-2a48-4666-90d7-92dcb436ae21>\",\"WARC-IP-Address\":\"104.26.15.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/weierstrass-approach-to-analysis.82257/\",\"WARC-Payload-Digest\":\"sha1:AHUWOXOODXSPI4BIXK4XBAUFBEFH6ZYJ\",\"WARC-Block-Digest\":\"sha1:APPRWDPNOUA4XNGPUAC5W2ZM4YSE4KLA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572581.94_warc_CC-MAIN-20220816211628-20220817001628-00117.warc.gz\"}"} |
http://www.radarinfo.xyz/2018/11/tech-explorations-basic-electronics-for.html | [
"-->",
null,
"# Tech Explorations™ Basic electronics for Arduino Makers\n\n100% OFF UDEMY COUPON UDEMY DISCOUNT UDEMY COURSES IT & SOFTWARE\nCoupon Online Course Details\nTech Explorations™ Basic electronics for Arduino Makers, An introduction to electronics to help you make the most from your Arduino or other prototyping platform.\n\nCreated by Dr. Peter Dalmaris\n\nPreview This Course - GET COUPON CODE\n\nWhat you'll learn\n• Understand the concepts of voltage, resistance and current\n• Use Ohm's Law to calculate voltage, current and resistance\n• Use Kirchhoff's Laws to calculate voltage and current\n• Understand the meaning of and calculate energy and power\n• Use resistors in various configurations, like in voltage dividers and voltage ladders\n• Read the value of a resistor from its package\n• Use pull-up and pull-down resistors\n• Understand the use of capacitors\n• Use capacitors as energy stores and filters\n• Calculate the RC time constant of a capacitor\n• Understand diodes\n• Measure a diode's voltage drop\n• Understand how to use rectifier and zener diodes\n• Protect a circuit from reverse polarity\n• Understand how to use a transistor to control low and high power loads\n• Calculate the currents and base resistor for a bipolar transistor\n• Use the correct voltage regulator for any circuit"
] | [
null,
"https://2.bp.blogspot.com/-6CnMZTvqEw0/W-rWHto5npI/AAAAAAAASXI/xKX1TrUCw9cFMSz8GWzYbr6t1TLgiFETACLcBGAs/s640/945826_b10b.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.75093114,"math_prob":0.7972482,"size":1235,"snap":"2021-43-2021-49","text_gpt3_token_len":278,"char_repetition_ratio":0.14459789,"word_repetition_ratio":0.01,"special_character_ratio":0.19352227,"punctuation_ratio":0.031088082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9869723,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T15:29:48Z\",\"WARC-Record-ID\":\"<urn:uuid:2f714d3d-337b-415c-b362-30b7a7d30309>\",\"Content-Length\":\"114499\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a80a4f04-a9db-45d0-805d-a9e4c31fdada>\",\"WARC-Concurrent-To\":\"<urn:uuid:87d69bec-2191-4351-8aec-264426b8a626>\",\"WARC-IP-Address\":\"142.250.81.211\",\"WARC-Target-URI\":\"http://www.radarinfo.xyz/2018/11/tech-explorations-basic-electronics-for.html\",\"WARC-Payload-Digest\":\"sha1:LJKFA5HMCDOU34KU5P4JBDMUPQEEYGAZ\",\"WARC-Block-Digest\":\"sha1:LBPZWTQ7DDH3UT6SUTCBSUNRI5VOFRGV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358774.44_warc_CC-MAIN-20211129134323-20211129164323-00136.warc.gz\"}"} |
https://ufal.mff.cuni.cz/pdt2.0/doc/manuals/en/t-layer/html/ch08s16s02.html | [
"### 16.2. Operators\n\nOperators are defined as (linking) expressions for the expression of mathematical operations and intervals. In terms of their form they are both subordinating and co-ordinating expressions. Operators comprise in particular:\n\n• signs for mathematical operations (+) and also their lexical forms (plus (=plus)).\n\n• the dash as a punctuation mark or the comma for the expression of an interval.\n\n• the expression až (=until) expressing an interval.\n\n• a combination of prepositions expressing an interval. For example: od - přes -do (=from - through -to), od - po (=from - to), počínaje - konče (=beginning - ending).\n\nThe rules for the annotation of mathematical operations and intervals are described in Section 11, \"Mathematical operations and intervals\".\n\nAn operator is represented in the tectogrammatical tree by a separate node only in cases where mathematical operations or intervals are represented as a paratactic structure. The node representing an operator then stands for the root node of this paratactic structure (`functor`=`OPER`).\n\nT-lemma. The t-lemma of the node representing the operator of the mathematical operation is the sign of the relevant mathematical operation, either in the form of the t-lemma substitute (`#Colon` for the colon, `#Slash` for the slash) or (in a case where a t-lemma substitute has not been provided for a given sign) in the form in which it occurs in the text: thus either as a symbol (for example: +), or as a word (for example: plus (=plus)).\n\nIn the t-lemma of the node for the operator of the interval realised by a combination of prepositional phrases the respective prepositions are joined by an underscore (od_přes_do (=from_through_to); see also the section on multi-word t-lemmas Section 3, \"T-lemmas of multi-word (complex) lexical units\"). Individual combinations of prepositions expressing an interval are realised at surface level by various formal variants (od - do (=from - to), od - po (=from - to), od - k (=from - towards)). For the respective formal variants one representative variant is always selected, recorded as the t-lemma of the node the operator represents in the tectogrammatical tree (od_do (=from - to)).\n\nThe expression \"\". The expression až (=until) has a dual function in the expression of mathematical operations and intervals:\n\n• an operator.\n\nThe expression až (=until) is an operator with the meaning of an interval in examples such as:\n\npět .`OPER` deset bodů (=five to ten points)\n\nobdobí 1938 .`OPER` 1954 (=the period from 1938 till 1954)\n\nIf the expression až (=until) is an operator, it is represented by a node which is the root node of the paratactic structure and has the functor `OPER` (`t_lemma`=).\n\n• an expression modifying an operator.\n\nThe expression až (=until) is a conjunction (operator) modifier which emphasises the second (final) boundary of the interval in cases where the interval is realised by a combination of prepositional phrases.\n\nIf the expression až (=till) is a conjunction (operator) modifier it is represented by a node which is a direct daughter node of the root node of the paratactic structure and has the functor `CM` (`t_lemma`=). The value `0` is entered in the attribute `is_member`.\n\nThe rule is that although this node is a direct daughter node of the root node of the paratactic structure and in the attribute `is_member` it has the value `0`, it is not a shared modifier of the conjuncts and it does not modify the mother of the root node of the paratactic structure.\n\nExample:\n\nod pěti .`CM` do deseti bodů (=from five to ten points)\n\nCf:\n\n• 1 plus 1 (=1 plus 1)\n\nThe operátor will be represented by a node with the t-lemma plus.\n\n• byt 1+1 (=lit. appartment 1+1)\n\nThe operator will be represented by the node + (a t-lemma substitute has not been introduced for the mathematical addition sign).\n\n• Utkání skončilo 2 : 0 (=The match ended 2:0).\n\nThe operator will be represented by a node with the t-lemma `#Colon` (the node representing the colon (with whatever meaning) has the t-lemma substitute `#Colon`; see Section 4, \"T-lemma substitutes\").\n\n• od deseti do osmdesáti procent (=from ten to eighty percent)\n\nPrepositional operators will be represented by a single node with the representative t-lemma od_do.\n\n• počínaje složitou dopravou na Strahov, přeplněným parkovištěm, .`CM` po dlouhé fronty na lístky (=starting with the complicated travel to Strahov, the overflowing car park and then the long queues for tickets.)\n\nPrepositional operators will be represented by one node with the representative t-lemma od_do. The expression až (=till) will be represented as a conjunction (operator) modifier.\n\n• od notebooků přes stolní modely .`CM` po víceprocesorové servery (=from notebooks to desktop models to multi-processor servers)\n\nPrepositional operators will be represented by one node with the representative t-lemma od_přes_do. The expression až (=till) will be represented as a conjunction (operator) modifier."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.77042085,"math_prob":0.9112415,"size":4882,"snap":"2023-14-2023-23","text_gpt3_token_len":1165,"char_repetition_ratio":0.16810168,"word_repetition_ratio":0.15414013,"special_character_ratio":0.22183532,"punctuation_ratio":0.07906977,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961315,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-03T05:19:08Z\",\"WARC-Record-ID\":\"<urn:uuid:9c838f9e-9de7-4288-b38d-3ef170a7d43f>\",\"Content-Length\":\"10858\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a269c5fb-8114-469b-b9d9-a33186f6acb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:9bb107e8-5669-444c-8a71-3888e60f847b>\",\"WARC-IP-Address\":\"195.113.20.52\",\"WARC-Target-URI\":\"https://ufal.mff.cuni.cz/pdt2.0/doc/manuals/en/t-layer/html/ch08s16s02.html\",\"WARC-Payload-Digest\":\"sha1:EKUWXVYXEONRN4PLCQX4FVWBCMUPUYSZ\",\"WARC-Block-Digest\":\"sha1:KLRZTI3Y6KLENYBH2PNV2UFBRNIGUKSN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649105.40_warc_CC-MAIN-20230603032950-20230603062950-00686.warc.gz\"}"} |
https://www.combinatorics.org/ojs/index.php/eljc/article/view/v14i1r57 | [
"# A Closed Formula for the Number of Convex Permutominoes\n\n• Filippo Disanto\n• Andrea Frosini\n• Renzo Pinzani\n• Simone Rinaldi\n\n### Abstract\n\nIn this paper we determine a closed formula for the number of convex permutominoes of size $n$. We reach this goal by providing a recursive generation of all convex permutominoes of size $n+1$ from the objects of size $n$, according to the ECO method, and then translating this construction into a system of functional equations satisfied by the generating function of convex permutominoes. As a consequence we easily obtain also the enumeration of some classes of convex polyominoes, including stack and directed convex permutominoes.\n\nPublished\n2007-08-20\nIssue\nArticle Number\nR57"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8698476,"math_prob":0.9434819,"size":646,"snap":"2021-31-2021-39","text_gpt3_token_len":149,"char_repetition_ratio":0.16199377,"word_repetition_ratio":0.0,"special_character_ratio":0.19349845,"punctuation_ratio":0.055045873,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98873585,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-20T13:45:23Z\",\"WARC-Record-ID\":\"<urn:uuid:135a2e3a-e526-4767-9091-791057324924>\",\"Content-Length\":\"13771\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:10f53a52-5637-4972-ba57-b3627d33118e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e81d23e-e727-4ad7-9b70-88886d399530>\",\"WARC-IP-Address\":\"150.203.186.177\",\"WARC-Target-URI\":\"https://www.combinatorics.org/ojs/index.php/eljc/article/view/v14i1r57\",\"WARC-Payload-Digest\":\"sha1:IVEZ4D6LAVZWKBF6KBLBCC56GKX3QKZC\",\"WARC-Block-Digest\":\"sha1:ZN6YMSXGSC64WRVNK3N2FF5O2HASP2IJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057039.7_warc_CC-MAIN-20210920131052-20210920161052-00258.warc.gz\"}"} |
https://programmer.group/web-form-data-extraction-based-on-interface-crawler.html | [
"# Web Form Data Extraction Based on Interface Crawler\n\nKeywords: Programming SQL Java Linux Oracle\n\nI recently received a task, to climb a data, the data in a web page table, the amount of data hundreds. Open debugging mode and find that the interface returns an html page, as long as it is treated as string. (xpath crawler is troublesome for parsing html files) The scheme uses regular matching of all cell rows and extracting cell content, which encounters some other problems:\n\n1. Originally, the content was extracted directly, and it was found that the content involved the languages and characters of various countries, which was a bit pitfalls.\n\n2. After intercepting the cell lines, we find that there are spaces between the contents of the two fields, and the number is uncertain. spit method is used to limit the size of the array.\n\n3. Incorrect coding format leads to scrambling\n\nShare the code for your reference:\n\n``````public static void main(String[] args) {\n\nString url = \"https://docs.oracle.com/cd/E13214_01/wli/docs92/xref/xqisocodes.html\";\nHttpGet httpGet = getHttpGet(url);\nJSONObject httpResponse = getHttpResponse(httpGet);\nString content = httpResponse.getString(\"content\");\nList<String> strings = regexAll(content, \"<tr.+</a>\" + LINE + \".+\" + LINE + \".+\" + LINE + \".+\" + LINE + \".+\" + LINE + \".+\" + LINE + \"</div>\");\nint size = strings.size();\nfor (int i = 0; i < size; i++) {\nString s = strings.get(i).replaceAll(\"<.+>\", EMPTY).replaceAll(LINE, EMPTY);\nString[] split = s.split(\" \", 2);\nString sql = \"INSERT country_code (country,code) VALUES (\\\"%s\\\",\\\"%s\\\");\";\noutput(String.format(sql, split.replace(SPACE_1, EMPTY), split.replace(SPACE_1, EMPTY)));\n}\ntestOver();\n}\n``````\n\nSome of the packaging methods are as follows:\n\n``````/**\n* Returns all matches\n*\n* @param text Text that needs to be matched\n* @param regex regular expression\n* @return\n*/\npublic static List<String> regexAll(String text, String regex) {\nList<String> result = new ArrayList<>();\nPattern pattern = Pattern.compile(regex);\nMatcher matcher = pattern.matcher(text);\nwhile (matcher.find()) {\n}\nreturn result;\n}\n``````\n\nThe sql part of the final stitching results are as follows:\n\n``````INSERT country_code (country,code) VALUES (\"German\",\"de\");\nINSERT country_code (country,code) VALUES (\"Greek\",\"el\");\nINSERT country_code (country,code) VALUES (\"Greenlandic\",\"kl\");\nINSERT country_code (country,code) VALUES (\"Guarani\",\"gn\");\nINSERT country_code (country,code) VALUES (\"Gujarati\",\"gu\");\nINSERT country_code (country,code) VALUES (\"Hausa\",\"ha\");\nINSERT country_code (country,code) VALUES (\"Hebrew\",\"he\");\nINSERT country_code (country,code) VALUES (\"Hindi\",\"hi\");\nINSERT country_code (country,code) VALUES (\"Hungarian\",\"hu\");\nINSERT country_code (country,code) VALUES (\"Icelandic\",\"is\");\nINSERT country_code (country,code) VALUES (\"Indonesian\",\"id\");\nINSERT country_code (country,code) VALUES (\"Interlingua\",\"ia\");\nINSERT country_code (country,code) VALUES (\"Interlingue\",\"ie\");\nINSERT country_code (country,code) VALUES (\"Inuktitut\",\"iu\");\nINSERT country_code (country,code) VALUES (\"Inupiak\",\"ik\");\nINSERT country_code (country,code) VALUES (\"Irish\",\"ga\");\nINSERT country_code (country,code) VALUES (\"Italian\",\"it\");\nINSERT country_code (country,code) VALUES (\"Japanese\",\"ja\");\n``````\n\n# Click on the Public Number Map\n\nPosted by GameMusic on Wed, 11 Sep 2019 20:00:11 -0700"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6838038,"math_prob":0.68106836,"size":4242,"snap":"2020-24-2020-29","text_gpt3_token_len":972,"char_repetition_ratio":0.21967909,"word_repetition_ratio":0.033928573,"special_character_ratio":0.2553041,"punctuation_ratio":0.1824611,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9518601,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-26T04:08:40Z\",\"WARC-Record-ID\":\"<urn:uuid:5b02d3de-9b6f-4b24-84e5-106c9f1a67a8>\",\"Content-Length\":\"12792\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53cd9e0b-191a-4bd3-ae52-9230f120d3da>\",\"WARC-Concurrent-To\":\"<urn:uuid:0fe23b76-6601-4de4-949c-1ad4ea38caf8>\",\"WARC-IP-Address\":\"174.137.48.86\",\"WARC-Target-URI\":\"https://programmer.group/web-form-data-extraction-based-on-interface-crawler.html\",\"WARC-Payload-Digest\":\"sha1:LQPE47LRIHFVCFYTPROK7LI5NZRWJ3UP\",\"WARC-Block-Digest\":\"sha1:I2X5ULKGKMC2U6TK27KB2TEPSSWDQIIJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347390442.29_warc_CC-MAIN-20200526015239-20200526045239-00402.warc.gz\"}"} |
https://ask.truemaths.com/question/a-person-standing-on-the-bank-of-a-river-observes-that-the-angle-subtended-by-a-tree-on-the-opposite-bank-is-600-when-he-retires-20-m-from-the-bank-he-finds-the-angle-to-be-300-find-the-height-of-t/ | [
"Guru\n\n# A person standing on the bank of a river observes that the angle subtended by a tree on the opposite bank is 600; when he retires 20 m from the bank, he finds the angle to be 300. Find the height of the tree and the breadth of the river.\n\n• 0\n\nsir this is the question from the book -ML aggarwal( avichal publication) class 10th , chapter20 , heights and distances\nthis question once has been asked in exam . and have chances to come in future .\nwe know A person standing on the bank of a river observes that the angle subtended by a tree on the opposite bank is 600;\nwhen he retires 20 m from the bank,\nhe finds the angle to be 300.\nwehave to Find the height of the tree and the breadth of the river.\nquestion no 17 , heights and distances , ICSE board\n\nShare\n\n1. Consider TR as the tree and PR as the width of the river.",
null,
"Take TR = X and PR = y\n\nIn right triangle TPR\n\ntan θ = TR/PR\n\nSubstituting the values\n\ntan 600 = x/y\n\nSo we get\n\n√3 = x/y\n\nx = y √3…… (1)\n\nIn right triangle TQR\n\ntan 300 = TR/QR\n\ntan 300 = x/(y + 20)\n\nWe get\n\n1/√3 = x/(y + 20)\n\nx = (y + 20)/ √3 ….. (2)\n\nUsing both the equations\n\ny√3 = (y + 20)/ √3\n\nSo we get\n\n3y = y + 20\n\n3y – y = 20\n\n2y = 20\n\ny = 10\n\nNow substituting the value of y in equation (1)\n\nx = 10 × √3 = 10 (1.732) = 17.32\n\nHence, the height of the tree is 17.32 m and the width of the river is 10 m.\n\n• 0"
] | [
null,
"https://cdn1.byjus.com/wp-content/uploads/2020/05/ml-aggarwal-solutions-for-class-10-chapter-20-image-17.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87629133,"math_prob":0.99796987,"size":1417,"snap":"2022-40-2023-06","text_gpt3_token_len":455,"char_repetition_ratio":0.13305025,"word_repetition_ratio":0.38511327,"special_character_ratio":0.3422724,"punctuation_ratio":0.07643312,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99902415,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T02:25:50Z\",\"WARC-Record-ID\":\"<urn:uuid:fb1b1d05-9c34-42c0-b49c-a1a630969e01>\",\"Content-Length\":\"161345\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a3536cf-fc90-4e5c-b5a6-1f9aa3d9bda3>\",\"WARC-Concurrent-To\":\"<urn:uuid:75a722a0-111d-4b55-b08e-b2bb736ad282>\",\"WARC-IP-Address\":\"104.21.63.18\",\"WARC-Target-URI\":\"https://ask.truemaths.com/question/a-person-standing-on-the-bank-of-a-river-observes-that-the-angle-subtended-by-a-tree-on-the-opposite-bank-is-600-when-he-retires-20-m-from-the-bank-he-finds-the-angle-to-be-300-find-the-height-of-t/\",\"WARC-Payload-Digest\":\"sha1:76O6L4W6N4TZMCP5W2B3XYOUQLKRFNW6\",\"WARC-Block-Digest\":\"sha1:DXCVZQMQGMPTSJRXGJQR25LX4X4FZ554\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494852.95_warc_CC-MAIN-20230127001911-20230127031911-00510.warc.gz\"}"} |
https://math.icalculator.com/ratio-calculators/32.4:23.3.html | [
"# Equivalent Ratios of 32.4:23.3\n\nEquivalent ratios or equal ratios are two ratios that express the same relationship between numbers as we covered in our tutorial on scaling up ratios. You can use the equivalent ratio calculator to solve ratio and/or proportion problems as required by entering your own ratio to produce a table similar to the \"Equivalent Ratios of 32.4:23.3 table\" provided below. This ratio table provides an increasingly list of ratios of the same proportions where the numerator and denominator are a direct multiplication of the multiplying value (mx). Ratio tables are very useful in math for calculating and comparing equivalent ratios, although most will likely use a ratio calculator to calculate equivalent ratios, it is also useful to have a ratio table where you can quickly cross reference associated ratios, particularly when working with complex math equations to resolve advanced math problems or physics problems. As a useful reference, we have included a table which provides links to the associated ratio values for the ratio 32.4:23.3, for example 32.4.1:23.3, 32.4:23.3.1, 32.4.1:23.3.2 and so on. We hope you will find these quick reference ratio tables useful as you can print and email them to yourself to aid your learning or a useful learning aide when teaching ratios to math students.\n\nLooking for a different type of ratio calculator or tutorial? Use the quick links below to access more ratio calculators\n\n 32.4 : 23.3(m1 = 1) 64.8 : 46.6(m2 = 2) 97.2 : 69.9(m3 = 3) 129.6 : 93.2(m4 = 4) 162 : 116.5(m5 = 5) 194.4 : 139.8(m6 = 6) 226.8 : 163.1(m7 = 7) 259.2 : 186.4(m8 = 8) 291.6 : 209.7(m9 = 9) 324 : 233(m10 = 10) 356.4 : 256.3(m11 = 11) 388.8 : 279.6(m12 = 12) 421.2 : 302.9(m13 = 13) 453.6 : 326.2(m14 = 14) 486 : 349.5(m15 = 15) 518.4 : 372.8(m16 = 16) 550.8 : 396.1(m17 = 17) 583.2 : 419.4(m18 = 18) 615.6 : 442.7(m19 = 19) 648 : 466(m20 = 20) 680.4 : 489.3(m21 = 21) 712.8 : 512.6(m22 = 22) 745.2 : 535.9(m23 = 23) 777.6 : 559.2(m24 = 24) 810 : 582.5(m25 = 25) 842.4 : 605.8(m26 = 26) 874.8 : 629.1(m27 = 27) 907.2 : 652.4(m28 = 28) 939.6 : 675.7(m29 = 29) 972 : 699(m30 = 30) 1004.4 : 722.3(m31 = 31) 1036.8 : 745.6(m32 = 32) 1069.2 : 768.9(m33 = 33) 1101.6 : 792.2(m34 = 34) 1134 : 815.5(m35 = 35) 1166.4 : 838.8(m36 = 36) 1198.8 : 862.1(m37 = 37) 1231.2 : 885.4(m38 = 38) 1263.6 : 908.7(m39 = 39) 1296 : 932(m40 = 40) 1328.4 : 955.3(m41 = 41) 1360.8 : 978.6(m42 = 42) 1393.2 : 1001.9(m43 = 43) 1425.6 : 1025.2(m44 = 44) 1458 : 1048.5(m45 = 45) 1490.4 : 1071.8(m46 = 46) 1522.8 : 1095.1(m47 = 47) 1555.2 : 1118.4(m48 = 48) 1587.6 : 1141.7(m49 = 49) 1620 : 1165(m50 = 50)\n\nDid you find the table of equivalent ratios of 32.4:23.3 useful? Please leave a rating below.\n\n## How to Calculate Ratios\n\nWhen calculating equivalent ratios you must multiply or divide both numbers in the ratio. This keeps both numbers in direct relation to each other. So, a ratio of 2/3 has an equivalent ratio of 4/6: in this ratio calculation we simply multiplied both 2 and 3 by 2.\n\n## Mathematical facts about the ratio 32.4:23.3\n\nThe numerator of the ratio 32.4:23.3 contains 1 decimal and the denominator contains 1 decimal\n\nThe lowest possible whole number equivalent ratio of the ratio 32.4:23.3 is:\n\n3.5604395604396 : 2.5604395604396\n\nIf you wish to express the ratio 32.4:23.3 as n to 1 then the ratio would be:\n\n32.4:23.3 as n to 1\n= 1.3905579399142 : 1\n\nIf you wish to express the ratio 32.4:23.3 as 1 to n then the ratio would be:\n\n32.4:23.3 as 1 to n\n= 1 : 0.71913580246914\n\nThe ratio 32.4:23.3 expressed as a fraction is [calculated using the ratio to fraction calculator]:\n\n32.4:23.3\n= 3.5604395604396/2.5604395604396\n\nThe ratio 32.4:23.3 expressed as a percentage is [calculated using the ratio to percentage calculator]:\n\n32.4:23.3\n= 139.05579399142%\n\n## Equivalent ratio tables for decimal ratios ranging 32 : 23 to 33 : 24\n\nThe table below contains links to equivalent ratio examples with ratios in increments of 0.1 in the range 32:23 to 33:24\n\n## Math Calculators\n\nYou may also find the following Math calculators useful."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.69235224,"math_prob":0.9995135,"size":4812,"snap":"2023-14-2023-23","text_gpt3_token_len":2175,"char_repetition_ratio":0.18198836,"word_repetition_ratio":0.021030495,"special_character_ratio":0.5953865,"punctuation_ratio":0.29570958,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999558,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-28T06:43:12Z\",\"WARC-Record-ID\":\"<urn:uuid:3c15ea2e-b182-4fda-8d53-21a452ce24ca>\",\"Content-Length\":\"36142\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29590629-93c1-462c-b37e-3874256e52c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:5ab74b79-422f-48e0-98cd-e37ab8c2a4e7>\",\"WARC-IP-Address\":\"172.66.43.65\",\"WARC-Target-URI\":\"https://math.icalculator.com/ratio-calculators/32.4:23.3.html\",\"WARC-Payload-Digest\":\"sha1:3M3QVICYHYS72M4IOMATDSS7DAT6UJVE\",\"WARC-Block-Digest\":\"sha1:IJMQE3YFQHOZXO3BK2ALFW5DQZIGVL6I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224643585.23_warc_CC-MAIN-20230528051321-20230528081321-00581.warc.gz\"}"} |
https://bodheeprep.com/fermat-theorem-application-finding-remainders | [
"Bodhee Prep-Online CAT Coaching | Online CAT Preparation | CAT Online Courses\n\n15% OFF on all CAT Courses. Discount code: BODHEE015. Valid till 7th Feb Enroll Now\n| Best Online CAT PreparationFor Enquiry CALL @ +91-95189-40261\n\n# Fermat Theorem : Application in finding remainders",
null,
"Fermat theorem states that for any two positive natural numbers N and P, if they are co-prime to each other then remainder obtained when ${N^{\\phi \\left( P \\right)}}$is divided by P is 1, where $\\phi \\left( P \\right)$ is the euler of P.\n\ni.e. $\\frac{{{N^{\\phi \\left( P \\right)}}}}{P} \\to R\\left( 1 \\right)$\n\nExample 1:Find the remainder 25^6 when is divided by 9.\n\nSolution:\n\nObserve that Euler of 9, $\\phi \\left( 9 \\right) = 6$, and 9 is co-prime to 25, hence with the direct application of Fermat theorem, the required remainder is 1.\n\n### Extension of Fermat theorem\n\nFor any two positive natural numbers N and P, if they are co-prime to each other then remainder obtained when ${N^{M \\times \\phi \\left( P \\right)}}$is divided by P is also 1, where $\\phi \\left( P \\right)$ is the euler of P and M is any positive integers.\n\nExample 2: Find the remainder when ${11^{705}}$ is divided by 17.\n\nSolution:\n\nObserve that 11 and 17 are co-prime to each other, and Euler of 17 = 16.\n\nAlso, 705 = 44×16 +1 so we can write ${11^{44 \\times 16 + 1}}$\n\nOr $\\frac{{{{11}^{705}}}}{{17}} = \\frac{{{{11}^{44 \\times 16 + 1}}}}{{17}} = \\frac{{{{11}^{44 \\times 16}} \\times 11}}{{17}}$\n\nApplying Fermat theorem, $\\frac{{{{11}^{44 \\times 16}}}}{{17}} \\to R\\left( 1 \\right)$and 11 divided by 17 the remainder obtained is 11 only.\n\nTherefore, the final remainder obtained when ${11^{705}}$ is divided by 17 is 1×11 = 11.\n\n### 4 Responses\n\n1.",
null,
"Himanshu says:\n\nVery well explained\n\n1.",
null,
"bodheeprep says:\n\n2.",
null,
"noahvishnu.2021 says:\n\nBetter notes than many other portals\n\n1.",
null,
"bodheeprep says:\n\nThanks Vishnu for this appreciation. Please recommend Bodhee Prep to others as well:)\n\n### [PDF] CAT 2021 Question Paper (slot 1, 2 & 3) with Solutions\n\nCAT 2021 question paper PDF is available on this page. The page has the CAT 2021 question paper PDFs of all the three slots. There\n\n### All About CAT Mock Test Series\n\nTable of Content for CAT Mock Tests Ideal number of CAT Mock Test Series How many CAT mocks should one write What is the right\n\n### [PDF] CAT 2020 Question Paper (slot 1,2 &3) with Solution\n\nCAT 2020 question paper threw a number of surprises. Not only was there a change in exam pattern but also the difficulty level of almost\n\n### BIG change in CAT 2020 Paper Pattern\n\nAnnounced changes in CAT 2020 Pattern As we march towards the end of the year 2020, there is another unexpected turn in the sorry saga\n\n##### CAT Online Courses\nFREE CAT prep Whatsapp Group"
] | [
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20800%20367'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2042%2042'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2042%2042'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2042%2042'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2042%2042'%3E%3C/svg%3E",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8617541,"math_prob":0.9852226,"size":1600,"snap":"2022-40-2023-06","text_gpt3_token_len":507,"char_repetition_ratio":0.12343358,"word_repetition_ratio":0.19157088,"special_character_ratio":0.376875,"punctuation_ratio":0.093457945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99672633,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T01:39:08Z\",\"WARC-Record-ID\":\"<urn:uuid:53147a08-933a-46ef-8557-83a82095ec48>\",\"Content-Length\":\"152657\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9fe0ab73-4aa0-4610-a078-6cb891fddd6b>\",\"WARC-Concurrent-To\":\"<urn:uuid:645c4781-50e2-4370-adea-d37c2f677632>\",\"WARC-IP-Address\":\"172.67.194.72\",\"WARC-Target-URI\":\"https://bodheeprep.com/fermat-theorem-application-finding-remainders\",\"WARC-Payload-Digest\":\"sha1:AX6L4TGPXZLVKI5MIN7YKFMISCKF7JPQ\",\"WARC-Block-Digest\":\"sha1:ZHPXWKXTJAW3RU4KTQSJJMNOFXDW3AQL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500158.5_warc_CC-MAIN-20230205000727-20230205030727-00274.warc.gz\"}"} |
https://www.extrica.com/article/16656 | [
"Published: 30 June 2016\n\n# Numerical computation of the flow noise for the centrifugal pump with considering the impeller outlet width\n\nHeng-xuan Luan1\nQing-guang Chen2\nLi-yuan Weng3\nYuan-zhong Luan4\nJie Li5\n1, 2College of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Qingdao, China\n3, 4College of Geomatics, Shandong University of Science and Technology, Qingdao, China\n5North China University of Science and Technology, Tangshan, China\nCorresponding Author:\nHeng-xuan Luan\nViews 149\n\n#### Abstract\n\nIn order to study the effects of the impeller outlet width on the flow noise of the centrifugal pump, a centrifugal pump is applied in the paper as the research object. Geometric parameters of the pump and impeller are constant, and BEM (Boundary Element Method) and experimental method are adopted to analyze noises when the impeller outlet width is 8 mm, 10 mm, 12 mm and 14 mm, respectively. Firstly, Large-eddy simulation method is applied to compute the transient flow field of the centrifugal pump. Larger pressures and flow velocity of the centrifugal pump mainly are at the edge of the impeller. When the fluid flows from the centrifugal pump, there are two obvious separation vortexes at the outlet of the centrifugal pump. The flow velocity distribution of the centrifugal pump in the horizontal plane is basically symmetric. Based on modal analysis and the transient flow field of the pump, BEM is adopted to compute the noise in the centrifugal pump caused by the unsteady flow, and experiments are also conducted for verification. Based on the above analysis, the noise in the interior and exterior field of the centrifugal pump is computed, and the effects of the impeller outlet width on the noise of the centrifugal pump are then studied. As shown from the result, the radiation sound power at the characteristic frequency increases with the increase of the impeller outlet width. With a reasonable range, the impeller outlet width makes the sound pressure level (SPL) in the interior and external field of various flow conditions be smaller. Considering the energy performance and flow field noises of the centrifugal pump, the pump has the optimal comprehensive performance at the impeller outlet width of 10 mm. The research results can be applied to provide a reference for the optimization design of the centrifugal pump with low vibration and noise.\n\n## 1. Introduction\n\nAt present, the centrifugal pump has become a main technique for recycling liquid energy and is widely used in the important field including petroleum, chemical engineering and so on [1-4]. To improve the recovery rate of energy, the centrifugal pump is gradually developing towards the direction of large power. However, flow-induced noises are one of key problems affecting its operation.\n\nThis paper firstly adopts large-eddy simulation to compute the flow field characteristics of the centrifugal pump under four flow conditions. Then, the flow field result of the centrifugal pump and the mesh model of BEM are imported into Virtual.Lab to conduct on coupling computation and obtain the internal and external sound field of the centrifugal pump. Finally, the computational result is compared with that of the experiment for verification.\n\n## 2. Geometric model and mesh of the centrifugal pump\n\nThe computational model is a single-stage and single-suction centrifugal pump. As shown in Fig. 1, design parameters are as follows: flow $Q=$50 m3/h, pump head $H=$34 m, rotating speed $n=$2900 r/min, inlet diameter ${D}_{1}=$75 mm, outlet diameter ${D}_{2}=$174 mm, impeller outlet width $b=$10 mm, number of blades $Z=$6, volute base diameter ${D}_{3}=$184 mm, and volute base width ${b}_{1}=$20 mm. When the numerical computation is conducted, the centrifugal pump should lengthen the outlet to ensure high computation accuracy. When the computational domain is divided into meshes, hexahedral meshes are adopted with considering the complex shape of the model. Meanwhile, boundary layer meshes are adopted. The number of boundary layers is 5. Growth rate is 1.1. Total thickness is 0.0017 mm. Finally, the number of meshes is 1361334, meshes in inlet section are 115058, meshes in impellers are 582673 and meshes in outlet section are 135185. The mesh model of the computational domain and the impeller is shown in Fig. 2.\n\nFig. 1Geometric model of the centrifugal pump",
null,
"a) Primary structure",
null,
"b) Computational domain",
null,
"c) Impeller\n\nFig. 2Mesh model of the centrifugal pump",
null,
"a) Computational domain",
null,
"b) Impeller",
null,
"## 3. Computation of flow field\n\n### 3.1. Control equation\n\n3D flow field in the centrifugal pump is solved by CFD, and turbulent effect is simulated based on LES method. The filter function is adopted to filter N-S equation, thus obtaining LES control equation [15-17]. The incompressible continuity equation and N-S equation are as follows:\n\n1\n$\\frac{\\partial {\\overline{u}}_{i}}{\\partial {x}_{i}}=0,$\n2\n$\\frac{\\partial {\\overline{u}}_{i}}{\\partial t}+\\frac{\\left(\\partial {\\overline{u}}_{i}{\\overline{u}}_{j}\\right)}{\\partial {x}_{j}}=-\\frac{1}{\\rho }\\frac{\\partial p}{\\partial {x}_{i}}+\\frac{\\partial }{\\partial {x}_{j}}\\left(\\nu \\frac{\\partial {\\overline{u}}_{i}{\\overline{u}}_{j}}{\\partial {x}_{j}}\\right)-\\frac{\\partial {\\tau }_{ij}}{\\partial {x}_{j}}+{B}_{i},$\n\nwherein, ${u}_{i}$ ($i=$1, 2, 3) represents the velocity component in parallel to the coordinate axis ${x}_{i}$. $t$ is time. $\\rho$ is the density. $p$ is the intensity of pressure. $\\nu$ is the motion viscosity. ${\\tau }_{ij}$ is the sub-grid scale stress tensor. ${B}_{i}$ is the source term generated from Coriolis force.\n\nThrough the eddy viscosity ${\\nu }_{sgs}$, the relationship between the sub-grid scale stress tensor ${\\tau }_{ij}$ and large-scale strain rate tensor is established as follows:\n\n3\n${\\tau }_{ij}-\\frac{1}{3}{\\delta }_{ij}{\\tau }_{kk}=-2{\\nu }_{sgs}{\\overline{S}}_{ij},$\n4\n${\\overline{S}}_{ij}=\\frac{1}{2}\\left(\\frac{\\partial {u}_{i}}{\\partial {x}_{j}}+\\frac{\\partial {u}_{j}}{\\partial {x}_{i}}\\right).$\n\nIn the study, the wall-adapting local eddy-viscosity (WALE) model is applied to solve the eddy viscosity. A dimensionless coefficient ${C}_{w}$ is contained in this model, which is defined as follows:\n\n5\n${\\nu }_{sgs}=\\left({C}_{w}\\mathrm{\\Delta }{\\right)}^{2}\\frac{\\left({S}_{ij}^{d}{S}_{ij}^{d}{\\right)}^{3/2}}{\\left({\\overline{S}}_{ij}{\\overline{S}}_{ij}{\\right)}^{5/2}+\\left({S}_{ij}^{d}{S}_{ij}^{d}{\\right)}^{5/4}},$\n6\n${S}_{ij}^{d}=\\frac{1}{2}\\left({\\overline{g}}_{ij}^{2}+{\\overline{g}}_{ji}^{2}\\right)-\\frac{1}{3}{\\delta }_{ij}{\\overline{g}}_{kk}^{2},$\n7\n${\\overline{g}}_{ij}=\\frac{\\partial {\\overline{u}}_{i}}{\\partial {x}_{j}},$\n\nwherein, $\\mathrm{\\Delta }$ is the filter length, depending on the grid volume ($\\mathrm{\\Delta }={\\left(\\mathrm{\\Delta }x\\mathrm{\\Delta }y\\mathrm{\\Delta }z\\right)}^{1/3}$), and model coefficient ${C}_{w}$ is chosen as 0.1.\n\n### 3.2. Computational boundary conditions and results\n\nThe inlet of the centrifugal pump is set as static pressure and outlet is set as mass. In the computational domain, all surfaces adopt the condition of no-slip wall and roughness is set according to the actual accuracy. Turbulence model adopts $k$-$\\epsilon$ model. The value of ${y}^{+}$ is about 1. Scalable wall function is selected. Computational accuracy is 0.00001. To clearly distinguish the unsteady information of the internal flow field, time step is set as 1.1×10-4 s. Namely, the impeller in every time step rotates about 1°. After the flow field presents a stable periodical change, the information of pressure pulsation of the impeller and shell surface starts to be exported. Finally, the pressure and velocity distribution of the centrifugal pump are shown in Fig. 3 and Fig. 4 respectively. As shown in Fig. 3, larger pressure of the centrifugal pump mainly appears at the edge position of the impeller. Larger flow velocity and impact force are produced when fluid flows into the centrifugal pump and bears the inertia effect of high-speed revolution of the centrifugal pump, which leads to larger pressure in volute of the centrifugal pump. Fig. 4 shows the distribution of flow velocity at the impeller of the centrifugal pump, from which it can be seen that flow velocity at the center of the impeller is relatively small and flow velocity at the edge of the impeller is relatively large.\n\nFig. 3Contour of pressure distribution of the centrifugal pump",
null,
"a) Centrifugal pump",
null,
"b) Impeller",
null,
"Fig. 4Contour of flow velocity distribution of the impeller",
null,
"a) Impeller",
null,
"The flow velocity distribution of the centrifugal pump is extracted and shown in Fig. 5. As can be seen from the figure, fluid will be thrown away from the outlet under the action force of high-speed revolution of the centrifugal pump when fluid flows into the internal of the centrifugal pump from the inlet. In addition, there are two obvious separation vortexes in the outlet. The flow velocity distribution of the centrifugal pump in the horizontal plane is basically symmetric.\n\nFig. 5Flow velocity distribution of the centrifugal pump",
null,
"a) Vertical plane",
null,
"b) Horizontal plane\n\n## 4. Computation and experimental verification of flow noises of the centrifugal pump\n\n### 4.1. Numerical calculation model of flow noises of the centrifugal pump\n\nThe noise of the centrifugal pump mainly contains mechanical vibration noises and flow-induced noises. Compared with flow noises, mechanical noises are relatively low. Therefore, it can be neglected. The generation and propagation of flow-induced noises are shown in Fig. 6. Flow-induced noises mainly come from pressure fluctuation within the centrifugal pump. When the pressure fluctuation acts on the inner surface of volute and pump cover, excitation force is generated, which causes the vibration of the centrifugal pump cover and radiates noises.\n\nFig. 6Generation and propagation of fluid-induced noises",
null,
"Numerical computation of flow noises of the centrifugal pump adopts the method as shown in Fig. 7 and makes the coupling between flow field characteristics and structural meshes. Firstly, large-eddy simulation is applied to obtain the distribution of flow field characteristics of the centrifugal pump. Then, the natural modal of cover structure of the centrifugal pump is solved. Finally, the natural modal and flow field result are imported into Virtual.Lab. Meanwhile, ID of meshes and nodes are checked to avoid conflict. The result of flow field and structural modal is mapped to boundary element model so that boundary element model will obtain all characteristics of flow field and structural modal and realize vibro-acoustic coupling computation. When the boundary element model of the centrifugal pump is obtained, it is necessary to ensure each wave length of sound contains 6 elements at least. Otherwise, the computational result will be difficult to guarantee. The final boundary element model of the centrifugal pump is shown in Fig. 8(a). The model contains 1209 elements and 1431 nodes. A field point mesh with the radius of 1m is established outside the centrifugal pump to receive the radiation noise of the centrifugal pump, as shown in Fig. 8(b).\n\nFig. 7The computational process of vibro-acoustic coupling",
null,
"Fig. 8Boundary element model of the centrifugal pump",
null,
"a) Boundary element model",
null,
"b) Field point mesh\n\n### 4.2. Experimental verification of flow noises of the centrifugal pump\n\nThe measurement of external filed noises of the centrifugal pump is interfered by the motor noise, pipeline noises and background noises, resulting in a certain difficulty in the measurement accuracy. In order to verify the reliability of the numerical computation method, the noise signal within the centrifugal pump is measured by a ST70 hydrophone, whose frequency range is 50 kHz-70 kHz and sound pressure sensitivity of the receiver is –204 dB. Through the flush installation, a hydrophone is mounted directly on the wall. The sensor probe [18-21] is flush with the wall surface around measurement points, thus measuring the fluid noise within the tube directly. And the measuring point of the hydrophone is located at the model pump outlet with 4 times of the pipe diameter.\n\nThe centrifugal energy performance curves at different impeller outlet widths are obtained experimentally as shown in Fig. 9. It can be found that the pump head increases with the increase of the impeller outlet width, since the axial flow velocity is further reduced by the increasing width, thus improving the theoretical head of the pump. With the increase of impeller outlet width, the efficiency curve has larger range of high efficiency area. The efficiency has been improved and the maximum efficiency point shifts to the large flow. In addition, pump head and efficiency have a large change range when impeller outlet width increases from 8 mm to 10 mm. Pump head and efficiency have a decreased change range when impeller outlet width continues to increase.\n\nFig. 9Energy performance curve",
null,
"",
null,
"BEM is adopted to compute the internal sound field and external radiation noises of the centrifugal pump. The medium is water, density is 1000 kg/m3, acoustic wave propagation velocity is 1500 m/s, and reference sound pressure is 1×10-6 Pa. The experimental and computational results of the flow-induced noise regarding the model pump with the impeller outlet width of 10 mm are compared as shown in Fig. 10. At 1000 Hz in the figure, the experimental value is 138 dB, and BEM value is 132 dB, with the error of 4.3 %. Therefore, the experimental value has a good consistency with the computational value. The contour of sound pressure within the centrifugal pump and the contour of sound field outside the centrifugal pump are extracted and shown in Fig. 11. As shown in Fig. 11(a), the inner impeller of the centrifugal pump has relatively large sound pressure at the edge and smaller sound pressure at the center, which is consistent with the flow field distribution in Fig. 3 and Fig. 4. The impeller of the centrifugal pump has larger flow velocity and pressure fluctuation at the edge, resulting in relatively serious radiation noises. As shown in Fig. 11(b), the largest far-field radiation noise of the centrifugal pump is at the position directly facing the outlet of the centrifugal pump and radiation noises in other places are relatively weak.\n\nFig. 10Comparison of sound pressure level between simulation and experiment",
null,
"## 5. Influence of impeller outlet width on the sound field of the centrifugal pump\n\nFrom the above analysis, it can be seen that BEM can effectively predict the internal and external sound field of the centrifugal pump. BEM is adopted to compute the radiation noise of the centrifugal pump under different impeller outlet widths and flows. SPL at 500 Hz is exacted and shown in Table 1.\n\nAs can be seen from Table 1, in 1.0${Q}_{opt}$, 1.2${Q}_{opt}$ and 1.4${Q}_{opt}$, the radiation sound power of the centrifugal pump at 500 Hz is increased with the increasing impeller outlet width $b$. and the acoustic power increase firstly and then decrease with the increasing impeller outlet In 0.8${Q}_{opt}$, which may be because CFD cannot accurately compute the volute force due to the complex flow field within the small flow condition. In general, the sound power of the centrifugal increases with the increase of the impeller outlet width $b$. The model pump with the impeller outlet width of 12 mm has much greater sound power in 1.0${Q}_{opt}$ than in 0.8${Q}_{opt}$, 1.2${Q}_{opt}$and 1.4${Q}_{opt}$, while the pump with the width of 8 mm has little difference in terms of the sound power under three conditions. Just in view of sound power values under each condition, the pump with the width of 8 mm is most optimal.\n\nFig. 11Internal and external sound field of the centrifugal pump",
null,
"a) Contour of internal sound pressure",
null,
"b) Contour of external radiation noises\n\nTable 1Sound power at characteristic frequency for different impeller outlet widths\n\n Impeller outlet width / mm Working conditions 0.8${Q}_{opt}$ 1.0${Q}_{opt}$ 1.2${Q}_{opt}$ 1.4${Q}_{opt}$ 8 29.75 26.373 42.315 49.33 10 76.933 132.32 56.439 61.21 12 57.56 167.395 62.1 73.52 14 55.31 171.32 63.42 80.13\n\nA clear directivity is in the propagation of sound. Corresponding to the sound source, different spatial points have different positions and directions, so the noise spectrums are also different. In order to obtain the circumferential distribution of the pump SPL, 36 monitor points are set at the out of the cover, which are 1000 mm away from the impeller rotation axis. The angle between any two monitor points is 10°. Noise directivity distribution of monitor points at characteristic frequency for different impeller outlet widths is shown in Fig. 12.\n\nAs shown in Fig. 12(a), in 0.8${Q}_{opt}$, SPL increases by 25 % with the increase of impeller outlet width from 8 mm to 10 mm; SPL has a larger change range with the increase of impeller outlet width from 10 mm to 12 mm, and increases 35 % compared with that when impeller outlet width is 10 mm; SPL increases by 17 % when impeller outlet width continues to increase to 14 mm. In ${Q}_{opt}$ as shown in Fig. 12(b), SPL increases by 20 % with the increase of impeller outlet width from 8 mm to 10 mm; when impeller outlet width increases from 10 mm to 12 mm, SPL increases by about 25 %; when impeller outlet width continues to increase to 14 mm, noise SPL increases by 13 %. In 1.2${Q}_{opt}$ as shown in Fig. 12(c), SPL increases by 18 % with the increase of impeller outlet width from 8 mm to 10 mm; SPL increases by 20 % with the increase of impeller outlet width from 10 mm to 12 mm; SPL increases by 11 % with the increase of impeller outlet width from 12 mm to 14 mm. In 1.41${Q}_{opt}$ as shown in Fig. 12(d), SPL increases by 14 % with the increase of impeller outlet width from 8 mm to 10 mm; SPL increases by 22 % with the increase of impeller outlet width from 10 mm to 12 mm; SPL increases by 10 % with the increase of impeller outlet width from 12 mm to 14 mm. As shown from the above analysis, SPL increases progressively with the increase of impeller outlet width when impeller outlet width is less than 12 mm. However, SPL decreases progressively with the increase of impeller outlet width when impeller outlet width is more than 12 mm. On one hand, the secondary flow vortex on the volute can be eliminated by the increase of the impeller outlet width, but reflux can be caused by the excessive impeller outlet width, thus increasing the static and dynamic interference effect of the impeller and volute. On the other hand, the impeller outlet width plays a significant impact on the efflux/wake phenomenon at the impeller outlet, and the wake area also enlarges with the increase of the outlet width [22-24]. Therefore, impeller outlet width has a certain range with the increase of the impeller outlet width, the pressure pulsation generated from the static and dynamic interference within the pump also strengthens, resulting in the increased noises.\n\nFig. 12Noise directivity distribution of monitor points under different impeller outlet width",
null,
"a) 0.8${Q}_{opt}$",
null,
"b) 1.0${Q}_{opt}$",
null,
"c) 1.2${Q}_{opt}$",
null,
"d) 1.4${Q}_{opt}$\n\nCompared to Fig. 9, it can be seen that: the pump energy performance is optimal and pump noise is maximum at the impeller outlet width of 14 mm, while the pump noise SPL is minimum and energy performance is poorer at the width of 14 mm. The pump energy performance is of little difference at the width of 10 mm and 14 mm, but the pump SPL is significantly smaller at the width of 10 mm. Compared with that at the width of 8 mm and 10 mm, the latter noise has smaller increasing amplitude but better energy performance than the former. As shown from the above analysis, a suitable range is presented in the impeller outlet width, so as to not only ensure the utility requirements of pump performance, but make the external field noise SPL be smaller under each flow condition. Regarding four impellers, it can be drawn from the comprehensive consideration of the centrifugal pump performance and noises that the centrifugal pump has an optimal comprehensive performance at the impeller outlet width of 10 mm.\n\nWhen flow is 1.0${Q}_{opt}$, the contours of far-field radiation noises and surface sound pressure of the centrifugal pump under different impeller widths are exacted and shown in Fig. 13 and Fig. 14, respectively. As shown in Fig. 13, the far-field radiation noise of the centrifugal pump increases to some degree with the increase of impeller outlet width, especially at the outlet of the centrifugal pump. As can be seen from Fig. 14, relatively large noise SPL always appears at the edge of the impeller when the outlet width of the centrifugal pump is from 8 mm to 12 mm, while the largest noise is at the center of the impeller when impeller outlet width is 14 mm. In addition, the contour of noises within the centrifugal pump does not show significant changes when impeller outlet width increases from 8 mm to 10 mm. With the increase of impeller outlet width from 10 mm to 12 mm, noises at the edge and outer surface of the impeller increase slightly. When impeller outlet width increases to 14 mm, noise distribution presents significant changes. The maximum noise SPL has shifted from the edge of the impeller to its center. From the above analysis, it shows that impeller outlet width has a serious impact on the radiation noise in the external field of the centrifugal pump and the internal noise of the centrifugal pump. Therefore, it is necessary to comprehensively consider many factors and confirm an optimal impeller outlet width when the centrifugal pump is designed.\n\nFig. 13Contour of far-field radiation noises of the centrifugal pump",
null,
"a) Outlet width = 8 mm",
null,
"b) Outlet width = 10 mm",
null,
"c) Outlet width = 12 mm",
null,
"d) Outlet width = 14 mm\n\nFig. 14Contour of surface sound pressure of the centrifugal pump",
null,
"a) Outlet width = 8 mm",
null,
"b) Outlet width = 10 mm",
null,
"c) Outlet width = 12 mm",
null,
"d) Outlet width = 14 mm\n\n## 6. Conclusions\n\nLES is combined with BEM to compute and analyze the influence of impeller outlet width on the interior and external noise of the centrifugal pump, and the following conclusions can be obtained.\n\n1) Larger pressure and flow velocity of the centrifugal pump mainly are at the edge of the impeller. When the fluid flows from the centrifugal pump, there are two obvious separation vortexes at the outlet of the centrifugal pump. The flow velocity distribution of the centrifugal pump in the horizontal plane is basically symmetric.\n\n2) BEM is adopted to numerically compute the noise of the centrifugal pump and compared with the experimental result. According to results, their change trends are basically same and the maximum error is only 4.3 %, indicating that the computational model of flow noises of the centrifugal pump in this paper is reliable.\n\n3) Under the same flow condition, SPL of the centrifugal pump at the characteristic frequency increases with the increase of impeller outlet width. Under the same outlet width, SPL of the centrifugal pump under ${Q}_{opt}$ is much greater than that under other flow conditions.\n\n4) Impeller outlet width of the centrifugal pump has a serious impact on the sound field of the centrifugal pump. However, impeller outlet width of the centrifugal pump has a suitable value, which leads to smaller far-field radiation noise and surface SPL under various flow conditions. The centrifugal pump has the optimal comprehensive performance when impeller outlet width of the centrifugal pump studied in this paper is 10 mm.\n\n20 November 2015\nAccepted\n01 June 2016\nPublished\n30 June 2016\nSUBJECTS\nAcoustics, noise control and engineering applications\nKeywords\ncentrifugal pump\nnoises\nimpeller outlet width\nBEM\nAuthor Contributions\n\nHengxuan Luan conceived the work that led to the submission, acquired data, played an important role in interpreting the results. Qingguang Chen revised the manuscript. Liyuan Weng performed the experiments. Yuanzhong Luan contributed materials tools and analysis tools. Jie Li designed the experiments, analyzed the data."
] | [
null,
"https://static-01.extrica.com/articles/16656/16656-img1.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img2.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img3.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img4.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img5.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img6.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img7.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img8.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img9.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img10.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img11.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img12.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img13.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img14.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img15.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img16.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img17.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img18.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img19.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img20.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img21.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img22.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img23.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img24.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img25.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img26.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img27.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img28.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img29.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img30.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img31.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img32.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img33.jpg",
null,
"https://static-01.extrica.com/articles/16656/16656-img34.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87157017,"math_prob":0.9727547,"size":28799,"snap":"2023-40-2023-50","text_gpt3_token_len":6572,"char_repetition_ratio":0.23299184,"word_repetition_ratio":0.1571489,"special_character_ratio":0.22736901,"punctuation_ratio":0.14095272,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96808344,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T15:25:51Z\",\"WARC-Record-ID\":\"<urn:uuid:70ab824e-24ed-460b-a011-ed7df42b3de3>\",\"Content-Length\":\"166978\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2c47e2eb-0faf-4d3e-982e-f12f5a71fd82>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b88b4b2-d6f7-499d-9e4c-b84d63413426>\",\"WARC-IP-Address\":\"20.50.64.3\",\"WARC-Target-URI\":\"https://www.extrica.com/article/16656\",\"WARC-Payload-Digest\":\"sha1:XSWB6FW26YBRNOONJ73ZBN4XXU7CSD5Y\",\"WARC-Block-Digest\":\"sha1:ZVY56WX6DBXRV4TKCJ3PR3BJFKO6LLJG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100531.77_warc_CC-MAIN-20231204151108-20231204181108-00184.warc.gz\"}"} |
https://malariajournal.biomedcentral.com/articles/10.1186/s12936-015-0966-y | [
"# Bias in logistic regression due to imperfect diagnostic test results and practical correction approaches\n\n## Abstract\n\n### Background\n\nLogistic regression is a statistical model widely used in cross-sectional and cohort studies to identify and quantify the effects of potential disease risk factors. However, the impact of imperfect tests on adjusted odds ratios (and thus on the identification of risk factors) is under-appreciated. The purpose of this article is to draw attention to the problem associated with modelling imperfect diagnostic tests, and propose simple Bayesian models to adequately address this issue.\n\n### Methods\n\nA systematic literature review was conducted to determine the proportion of malaria studies that appropriately accounted for false-negatives/false-positives in a logistic regression setting. Inference from the standard logistic regression was also compared with that from three proposed Bayesian models using simulations and malaria data from the western Brazilian Amazon.\n\n### Results\n\nA systematic literature review suggests that malaria epidemiologists are largely unaware of the problem of using logistic regression to model imperfect diagnostic test results. Simulation results reveal that statistical inference can be substantially improved when using the proposed Bayesian models versus the standard logistic regression. Finally, analysis of original malaria data with one of the proposed Bayesian models reveals that microscopy sensitivity is strongly influenced by how long people have lived in the study region, and an important risk factor (i.e., participation in forest extractivism) is identified that would have been missed by standard logistic regression.\n\n### Conclusion\n\nGiven the numerous diagnostic methods employed by malaria researchers and the ubiquitous use of logistic regression to model the results of these diagnostic tests, this paper provides critical guidelines to improve data analysis practice in the presence of misclassification error. Easy-to-use code that can be readily adapted to WinBUGS is provided, enabling straightforward implementation of the proposed Bayesian models.\n\n## Background\n\nEpidemiologists use logistic regression to identify risk factors (or protective factors) based on binary outcomes from diagnostic tests. As a consequence, this statistical model is used ubiquitously in studies conducted around the world, encompassing a wide range of diseases. One issue with this tool, however, is that it fails to account for imperfect diagnostic test results (i.e., misclassification errors). In other words, depending on the diagnostic method employed, a negative test might be incorrectly interpreted as lack of infection (i.e., false-negative) and/or a positive test result might be incorrectly interpreted as infection presence (i.e., false-positive) . This is particularly relevant for malaria given the numerous diagnostic techniques that are commonly employed [e.g., rapid diagnostic tests (RDTs), fever, anaemia, microscopy, and polymerase chain reaction (PCR)].\n\nImperfect detection has important implications. For instance, the determination of infection prevalence (i.e., the proportion of infected individuals) will be biased if detection errors are ignored . However, it is typically under-appreciated that errors in detection may also influence the identification of risk factors and estimates of their effect. An important study by Neuhaus demonstrated that as long as covariates do not influence sensitivity and/or specificity (e.g., non-differential outcome misclassification), then imperfect detection is expected to result in adjusted odds ratios that are artificially closer to zero and underestimation of uncertainty in parameter estimates (see also ). However, when sensitivity and specificity are influenced by covariates, the direction of the bias in parameter estimates is difficult to predict [12, 14].\n\nSeveral methods have been proposed in the literature to adjust for misclassification of outcomes, including an expectation–maximization (EM) algorithm , the explicit acknowledgement of misclassification in the specification of the likelihood, enabling users to fit the model using SAS code , probabilistic sensitivity analysis and Bayesian approaches . Unfortunately, these methods have not been widely adopted by the malaria epidemiology community, likely because these problems are rarely acknowledged outside biostatistics and statistically inclined epidemiologists. Lack of awareness is particularly problematic because several of the proposed modelling approaches that address this problem work best if an ‘internal validation sample’ is collected alongside the main data.\n\nThis article begins with a brief literature review to demonstrate how malaria epidemiologists are generally unaware of the problem associated with, and the proposed methods to deal with, misclassification error. Then, different types of auxiliary data and the associated statistical models that can be used to appropriately address this problem are described and straightforward code is provided to readily implement these models. Finally, performance of these models is illustrated using simulations and a case study on malaria in a rural settlement of the western Brazilian Amazon.\n\n## Methods\n\n### Systematic literature review\n\nTo provide support for the claim that malaria epidemiologists generally do not modify their logistic regressions to account for imperfect diagnostic test outcomes, a targeted literature review was conducted. PubMed was searched using different combinations of the search terms ‘malaria’, ‘logistic’, ‘models’, ‘regression’, ‘diagnosis’, and ‘diagnostic’. The search was restricted to studies published between January 2005 and April 2015. Of the 209 search results, 173 articles were excluded because they included authors from this article, were unrelated to malaria, malarial status was either unreported or not the outcome variable in the logistic regression, and/or they relied solely on microscopy. Studies that relied only on microscopy were excluded because this diagnostic method is considered the gold standard in much of the world, with the important exception of locations with relatively low transmission (e.g., Latin America), where PCR is typically considered to be the gold standard method. Detailed information regarding the literature review (e.g., list of articles with the associated reasons for exclusion) is available upon request.\n\n### Statistical models and auxiliary data to address misclassification error\n\nTo avoid the problem associated with imperfect detection when using logistic regression, one obvious solution is to use a highly sensitive and specific diagnostic test (e.g., the gold standard method) to determine disease status for all individuals. Unfortunately, this is often unfeasible and/or not scalable because of cost or other method requirements (e.g., electricity, laboratory equipment, expertise availability, or time required). Alternatively, statistical methods that specifically address the problem of imperfect detection (i.e., misclassification) can be adopted. Unfortunately, these statistical models contain parameters that cannot be estimated from data collected in regular cross-sectional surveys or cohort studies based on a single diagnostic test. Therefore, these statistical methods are described in detail along with the additional data that are required to fit them.\n\nFor all models, JAGS code is provided for readers interested in implementing and potentially modifying these models (see Additional Files 1, 2, 3, and 4 for details). Readers should have no problem adapting the same code to WinBUGS/OpenBUGS, if desired. The benefit of using Bayesian models is that they can be readily extended to account for additional complexities (e.g., random effects to account for sampling design). As a result, the code provided here is useful not only for users interested in this paper’s Bayesian models but also as a stepping stone for more advanced models.\n\n### Bayesian model 1\n\nOne option is to use results from an external study on the sensitivity and specificity of the diagnostic method employed. Say that this external study employed the same diagnostic method, together with the gold standard method, and reported the estimated sensitivity $$\\widehat{SN}$$ and specificity $$\\widehat{SP}$$. This information can be used to properly account for imperfect detection. More specifically, Bayesian model 1 assumes that\n\n$$I_{i} \\sim Bernoulli\\left( {\\frac{{{ \\exp }\\left( {\\beta_{0} + \\beta_{1} x_{i1} + \\beta_{2} x_{i2} + \\cdots } \\right)}}{{1 + { \\exp }\\left( {\\beta_{0} + \\beta_{1} x_{i1} + \\beta_{2} x_{i2} + \\cdots } \\right)}}} \\right)$$\n\nwhere $$I_{i}$$ is the infection status of the ith individual, $$\\beta_{0} ,\\beta_{1} ,\\beta_{2} , \\ldots$$ are regression parameters, and $${\\text{x}}_{{{\\text{i}}1}} ,x_{i2} , \\ldots$$ are covariates. It further assumes that:\n\n$$D_{i} \\sim Bernoulli(\\widehat{SN}) \\quad {\\text{if}}\\;\\;I_{i} = 1$$\n$$D_{i} \\sim Bernoulli\\left( {1 - \\widehat{SP}} \\right) \\quad {\\text{if}}\\;I_{i} = 0$$\n\nwhere $$D_{i}$$ is the regular diagnostic test result for the ith individual. Finally, different priors can be assigned for the disease regression parameters. A fairly standard uninformative prior is adopted for these parameters, given by:\n\n$$\\beta_{j} \\sim N(0,10).$$\n\nOne problem with this approach, however, is that it assumes that these diagnostic test parameters are exactly equal to their estimates $$\\widehat{SN}$$ and $$\\widehat{SP}$$. A better approach would account for uncertainty around these estimates of sensitivity and specificity, as described in Bayesian model 2.\n\n### Bayesian model 2\n\nThis model is very similar to Bayesian model 1, except that it employs informative priors for sensitivity SN and specificity SP. One way to create these priors is to use the following information from the external study:\n\n• $$N_{ + }$$ number of infected individuals, as assessed using the gold standard method;\n\n• $$T_{ + }$$ number of individuals detected to be infected by the regular diagnostic method among all $$N_{ + }$$ individuals;\n\n• $$N_{ - }$$ number of healthy individuals, as assessed using the gold standard method; and\n\n• $$T_{ - }$$ number of individuals not detected to be infected by the regular diagnostic method among all $$N_{ - }$$ individuals.\n\nFollowing the ideas in [19, 20], these ‘data’ can be used to devise informative priors of the form:\n\n$$SN\\sim Beta\\left( {T_{ + } + 1,N_{ + } - T_{ + } + 1} \\right)$$\n$$SP\\sim Beta\\left( {T_{ - } + 1,N_{ - } - T_{ - } + 1} \\right).$$\n\nThere are other ways of creating informative priors for SN and SP that do not rely on these four numbers (i.e., $$T_{ - } ,T_{ + } ,N_{ - } ,N_{ + }$$) (e.g., based on estimates of SN and SP with confidence intervals from a meta-analysis) but the method proposed above is likely to be broadly applicable given the abundance of studies that report these four numbers.\n\nTwo potential problems arise when using external data to estimate SN and SP. First, results from the external study are assumed to aptly apply to the study in question (i.e., ‘transportability’ assumption), which may not necessarily be the case if diagnostic procedures and storage conditions of diagnostic tests are substantially different. Second, the performance of the diagnostic test may depend on covariates (i.e., differential misclassification) . For instance, microscopy performance for malaria strongly depends on parasite density . If age is an important determinant of parasite density in malaria (i.e., older individuals are more likely to display lower parasitaemia), then microscopy sensitivity might be higher for younger children than for older children or adults. Another example refers to diagnostic methods that rely on the detection of antibodies. For these methods, sensitivity might be lower for people with compromised immune systems (e.g., malnourished children). In these cases, adopting a single value of SN and SP in Bayesian model 1 or 2 might be overly simplistic and may lead to even greater biases in parameter estimates. Bayesian model 3 solves these two problems associated with using external data.\n\n### Bayesian model 3\n\nInstead of relying on external sources of information, another alternative is to collect additional information on the study participants themselves (also known as an internal validation sample ). More specifically, due to its higher cost, one might choose to diagnose only a small sub-set of individuals using the gold standard method. This sample enables the estimation of SN and SP of the regular diagnostic test (and potentially reveals how these test performance characteristics are impacted by covariates) without requiring the ‘transportability’ assumption associated with using external data.\n\nIn Bayesian model 3, the gold standard method is assumed to be employed concurrently with the regular diagnostic method for a randomly chosen sub-set of individuals. Its structure closely follows that of Bayesian models 1 and 2, except that now sensitivity and specificity are allowed to vary according to covariates:\n\n$$D_{i} \\sim Bernoulli\\left( {SN_{i} = \\frac{{{ \\exp }\\left( {\\alpha_{0} + \\alpha_{1} x_{i1} + \\alpha_{2} x_{i2} + \\cdots } \\right)}}{{1 + { \\exp }\\left( {\\alpha_{0} + \\alpha_{1} x_{i1} + \\alpha_{2} x_{i2} + \\cdots } \\right)}}} \\right)\\;{\\text{if}}\\;I_{i} = 1$$\n$$D_{i} \\sim Bernoulli\\left( {1 - SP_{i} = \\frac{1}{{1 + { \\exp }\\left( {\\omega_{0} + \\omega_{1} x_{i1} + \\omega_{2} x_{i2} + \\cdots } \\right)}}} \\right)\\;{\\text{if}}\\;I_{i} = 0$$\n\nwhere additional regression parameters ($$\\alpha_{0} ,\\alpha_{1} ,\\alpha_{2} , \\ldots$$ and $$\\omega_{0} ,\\omega_{1} ,\\omega_{2} , \\ldots$$) determine how sensitivity and specificity, respectively, vary from individual to individual as a function of the observed covariates. Notice that the covariates in these sensitivity and specificity sub-models do not need to be the same as those used to model infection status $$I_{i}$$. Also notice that it is only feasible to estimate all these regression parameters because of the assumption that infection status $$I_{i}$$ is known for a sub-set of individuals tested with the gold standard method. More specifically, it is assumed that $$I_{i} = G_{i}$$ for these individuals, where $$G_{i}$$ is the result from the gold standard method. A summary of the different types of data discussed above and the corresponding statistical models is provided in Table 1.\n\n### Simulations\n\nThe effectiveness of the proposed Bayesian models in estimating the regression parameters was assessed using simulations. One hundred datasets were created for each combination of sensitivity (SN = 0.6 or SN = 0.9) and specificity (SP = 0.9 or SP = 0.98). Sensitivity and specificity values were chosen to encompass a wide spectrum of performance characteristics of diagnostic methods. Furthermore, it is assumed that sensitivity and specificity do not change as a function of covariates. Each dataset consisted of diagnostic test results for 2000 individuals, with four covariates standardized to have mean zero and standard deviation of one. In these simulations, infection prevalence when covariates were zero (i.e., $$\\frac{{\\exp \\left( {\\beta_{0} } \\right)}}{{1 + \\exp \\left( {\\beta_{0} } \\right)}}$$) was randomly chosen to vary between 0.2 and 0.6 and slope parameters were randomly drawn from a uniform distribution between −2 and 2.\n\nFor each simulated dataset, the true slope parameters were estimated by fitting a standard logistic regression (‘Std.Log.’) and the Bayesian models described above. For the methods that relied on external study results, it was assumed that $$N_{ - } = N_{ + } = 100$$ and that $$T_{ + } \\sim Binomial\\left( {N_{ + } ,SN} \\right)$$ and $$T_{ - } \\sim Binomial\\left( {N_{ - } ,SP} \\right)$$. Therefore, the assumption for Bayesian model 1 (‘Bayes 1’) was that sensitivity and specificity were equal to $$\\widehat{SN} = \\frac{{T_{ + } }}{{N_{ + } }}$$ and $$\\widehat{SP} = \\frac{{T_{ - } }}{{N_{ - } }}$$. For Bayesian model 2 (‘Bayes 2’), the set of numbers $$\\left\\{ {T_{ + } ,T_{ - } ,N_{ + } ,N_{ - } } \\right\\}$$ was used to create informative priors for sensitivity and specificity. Finally, Bayesian model 3 (‘Bayes 3’), assumed that results from the gold standard diagnostic method were available for an internal validation sample consisting of a randomly chosen sample of 200 individuals (10 % of the total number of individuals).\n\nTwo criteria were used to compare the performance of these methods. The first criterion assessed how often these methods captured the true parameter values within their 95 % confidence intervals (CI). Thus, this criterion consisted of the 95 % CI coverage for dataset d and method m, given by $$C_{d,m} = \\frac{{\\mathop \\sum \\nolimits_{j = 1}^{4} I\\left( {\\hat{\\beta }_{{{\\text{j}},{\\text{d}},{\\text{m}}}}^{\\text{lo}} \\; < \\beta_{j,d} < \\hat{\\beta }_{{{\\text{j}},{\\text{d}},{\\text{m}}}}^{\\text{hi}} } \\right)}}{4}$$. In this equation, $$\\beta_{j,d}$$ is the jth true parameter value for simulated data d, and $$\\hat{\\beta }_{{{\\text{j}},{\\text{d}},{\\text{m}}}}^{\\text{lo}}$$ and $$\\hat{\\beta }_{{{\\text{j}},{\\text{d}},{\\text{m}}}}^{\\text{hi}}$$ are the jth estimated lower and upper bounds of the 95 % CI. The function I() is the indicator function, which takes on the value of one if the condition inside the parentheses is true and zero otherwise. Given that statistical significance of parameters is typically judged based on these CIs, it is critical that these intervals retain their nominal coverage. Thus, $$C_{d,m}$$ values close to 0.95 indicate better models.\n\nOne problem with the 95 % CI coverage criterion, however, is that a model might have good coverage as a result of exceedingly wide intervals, a result that is undesirable. Thus, the second criterion consisted in a summary measure that combines both bias and variance, given by the mean-squared errors (MSE). This statistic was calculated for dataset d and method m as $$MSE_{d,m} = \\frac{{\\mathop \\sum \\nolimits_{j = 1}^{4} E\\left[ {\\left( {\\beta_{j,d} - \\hat{\\beta }_{j,d,m} } \\right)^{2} } \\right]}}{4}$$, where $$\\hat{\\beta }_{j,d,m}$$ and $$\\beta_{j,d}$$ are the jth slope estimate and true parameter, respectively. Smaller values of $$MSE_{d,m}$$ indicate better model performance.\n\n### Case study\n\nCase study data came from a rural settlement area in the western Brazilian Amazon state of Acre, in a location called Ramal Granada. These data were collected in four cross-sectional surveys between 2004 and 2006, encompassing 465 individuals. Individuals were tested for malaria using both microscopy and PCR, regardless of symptoms. Additional details regarding this dataset can be found in [22, 23].\n\nMicroscopy test results were analyzed first using a standard logistic regression model, where the potential risk factors were age, time living in the study region (‘Time’), gender, participation on forest extractivism (‘Extract’), and hunting or fishing (‘Hunt/Fish’). Taking advantage of the concurrent microscopy and PCR results, the outcomes from this standard logistic regression model were then contrasted with that of Bayesian model 3.\n\nMicroscopy sensitivity is known to be strongly influenced by parasitaemia. Furthermore, it has been suggested that people in the Amazon region can develop partial clinical immunity (probably associated with lower parasitaemia) based on past cumulative exposure to low intensity malaria transmission . Because rural settlers often come from non-malarious regions, time living in the region might be a better proxy for past exposure than age . For these reasons, microscopy sensitivity was modelled as a function of age and time living in the region.\n\n## Results\n\n### Systematic literature review\n\nOf the 36 studies that satisfied the criteria, 70 % did not acknowledge imperfect detection in malaria outcome. The only articles that accounted for imperfect detection were those exclusively focused on the performance of diagnostic tests . No instances were found where imperfect detection was specifically incorporated into a logistic regression framework, despite the existence of methods to correct this problem within this modelling framework. These results suggest that malaria epidemiologists are generally unaware of the strong impact that imperfect detection can have on parameter estimates from logistic regression.\n\n### Simulations\n\nDifferences between the standard logistic regression and the proposed Bayesian models were striking regarding their 95 % credible interval (CI) coverage. The standard logistic regression had consistently lower than expected 95 % CI coverage, frequently missing the true parameter estimates (Fig. 1). For example, in the most optimistic scenario regarding the performance of the diagnostic method (scenario in which sensitivity and specificity were set to 0.9 and 0.98, respectively) only 6 % of the standard logistic regressions returned CIs that always contained the true parameter. On the other hand, the Bayesian models performed much better, frequently producing CIs that always contained the true parameters.\n\nResults also suggest that the improved 95 % CI coverage from the Bayesian models did not come at the expense of overly wide intervals. Indeed, these models greatly improved estimation of the true regression parameters under the MSE criterion compared to the standard logistic regression model (Fig. 2). The Bayesian models outperformed (i.e., had a smaller MSE) the standard logistic regression model in >78 % of the simulations. Finally, simulation results also revealed that diagnostic methods with low sensitivity and/or low specificity generally resulted in much higher MSE (notice the y-axis scale in Fig. 2), highlighting how imperfect detection can substantially hinder the ability to estimate the true regression parameters, regardless of the method employed to estimate parameters.\n\n### Case study\n\nFindings reveal that the standard logistic regression results might fail to detect important risk factors (e.g., participation in forest extractivism ‘Extract’), might over-estimate some effect sizes (e.g., participation in hunting/fishing ‘Hunt/Fish’), or might incorrectly detect a significant quadratic relationship (e.g., ‘Time2’) (left panel in Fig. 3). The Bayesian model also suggests that settlers living for a longer period of time in the region tended to have lower parasitaemia, leading to a statistically significant lower microscopy sensitivity, as well as statistically significant lower probability of infection (right panels in Fig. 3). On the other hand, age was neither a significant covariate for sensitivity nor for probability of infection.\n\n## Discussion\n\nA review of the literature shows that malaria epidemiologists seldom modify their logistic regression to accommodate for imperfect diagnostic test results. Yet, the simulations and case study illustrate the pitfalls of this approach. To address this problem, three Bayesian models are proposed that, under different assumptions regarding data availability, appropriately accounted for sensitivity and specificity of the diagnostic method and demonstrated how these methods significantly improve inference on disease risk factors. Given the widespread use of logistic regression in epidemiological studies across different geographical regions and diseases and the fact that imperfect detection methods are not restricted to malaria, this article can help improve current data collection and data analysis practice in epidemiology. For instance, awareness of how imperfect detection can bias modelling results is critical during the planning phase of data collection to ensure that the appropriate internal validation dataset is collected if one intends to use Bayesian model 3.\n\nTwo of the proposed Bayesian models (‘Bayes 1’ and ‘Bayes 2’) rely heavily on external information regarding the diagnostic method (i.e., external validation data). As a result, if this information is unreliable, then these methods might perform worse than the simulations suggest. Furthermore, a key assumption in both of these models is that sensitivity and specificity do not depend on covariates (i.e., non-differential classification). This assumption may or may not be justifiable. Thus, a third model (‘Bayes 3’) was created which relaxes this assumption and relies on a sub-sample of the individuals being tested with both the regular diagnostic and gold standard methods (i.e., internal validation sample). For this latter model, one has to be careful regarding how the sub-sample is selected; if this sample is not broadly comparable to the overall set of individuals in the study (e.g., not a random sub-sample), biases might be introduced in parameter estimates [e.g., 29]. These three models are likely to be particularly useful for researchers interested in combining abundant data from cheaper diagnostic methods (e.g., data from routine epidemiological surveillance) with limited research data collected using the gold standard method [22, 30].\n\nAn important question refers to how to determine the size of the internal validation sample. To address this, it is important to realize that Bayesian model 3 encompasses three regressions: one for the probability of being diseased, another to model sensitivity and the third to model specificity. The sensitivity regression relies on those individuals diagnosed to be positive by the gold standard method while the specificity regression relies on those with a negative diagnosis using the gold standard method. As a result, if prevalence is low, then the sensitivity regression will have very few observations and therefore trying to determine the role of several covariates on sensitivity is likely to result in an overfitted model. Similarly, if prevalence is high, the specificity regression will have very few observations and care should be taken not to overfit the model. Ultimately, the necessary size of the internal validation sample will depend on overall disease prevalence (as assessed by the gold standard method) and the number of covariates that one wants to evaluate when modelling sensitivity and specificity. Finally, an important limitation of Bayesian model 3 is the assumption that the gold standard method performs perfectly (i.e., sensitivity and specificity equal to 1), which is clearly overly optimistic [31, 32]. Developing straightforward models that avoid the assumption of a perfect gold standard method represents an important area of future research.\n\nPossible extensions of the model include allowing for correlated sensitivity and specificity or allowing for misclassification in response and exposure variables, as in [33, 34]. Furthermore, although this paper focused on the standard logistic regression, imperfect detection impacts other types of models as well, such as survival models and Poisson regression models . Finally, the benefits of using these models apply specifically to cross-sectional and cohort studies but not to case–control studies. In case–control studies, disease status is no longer random (i.e., it is fixed by design) and thus additional assumptions might be needed for the methods presented here to be applicable .\n\n## Conclusions\n\nThe standard logistic regression model has been an invaluable tool for epidemiologists for decades. Unfortunately, imperfect diagnostic test results are ubiquitous in the field and may lead to considerable bias in regression parameter estimates. Given the numerous diagnostic methods employed by malaria researchers and the ubiquitous use of logistic regression to model the results of these diagnostic methods, this paper provides critical guidelines to improve data analysis practice in the presence of misclassification error. Easy-to-use code is provided that can be readily adapted to WinBUGS and enables straightforward implementation of the proposed Bayesian models. The time is ripe to improve upon the standard logistic regression and better address the challenge of modelling imperfect diagnostic test results.\n\n## References\n\n1. 1.\n\nBarbosa S, Gozze AB, Lima NF, Batista CL, Bastos MDS, Nicolete VC, et al. Epidemiology of disappearing Plasmodium vivax malaria: a case study in rural Amazonia. PLOS Negl Trop Dis. 2014;8:e3109.\n\n2. 2.\n\nAcosta POA, Granja F, Meneses CA, Nascimento IAS, Sousa DD, Lima Junior WP, et al. False-negative dengue cases in Roraima, Brazil: an approach regarding the high number of negative results by NS1 AG kits. Rev Inst Med Trop Sao Paulo. 2014;56:447–50.\n\n3. 3.\n\nWeigle KA, Labrada LA, Lozano C, Santrich C, Barker DC. PCR-based diagnosis of acute and chronic cutaneous leishmaniasis caused by Leishmania (Viannia). J Clin Microbiol. 2002;40:601–6.\n\n4. 4.\n\nBaiden F, Webster J, Tivura M, Delimini R, Berko Y, Amenga-Etego S, et al. Accuracy of rapid tests for malaria and treatment outcomes for malaria and non-malaria cases among under-five children in rural Ghana. PLoS One. 2012;7:e34073.\n\n5. 5.\n\nPeeling RW, Artsob H, Pelegrino JL, Buchy P, Cardosa MJ, Devi S, et al. Evaluation of diagnostic tests: dengue. Nat Rev Microbiol. 2010;8:S30–8.\n\n6. 6.\n\nAmato Neto V, Amato VS, Tuon FF, Gakiya E, de Marchi CR, de Souza RM, et al. False-positive results of a rapid K39-based strip test and Chagas disease. Int J Infect Dis. 2009;13:182–5.\n\n7. 7.\n\nSundar S, Reed SG, Singh VP, Kumar PCK, Murray HW. Rapid accurate field diagnosis of Indian visceral leishmaniasis. Lancet. 1998;351:563–5.\n\n8. 8.\n\nMabey D, Peeling RW, Ustianowski A, Perkins MD. Diagnostics for the developing world. Nat Rev Microbiol. 2004;2:231–40.\n\n9. 9.\n\nJoseph L, Gyorkos TW, Coupal L. Bayesian estimation of disease prevalence and the parameters of diagnostic tests in the absence of a gold standard. Am J Epidemiol. 1995;141:263–72.\n\n10. 10.\n\nSpeybroeck N, Praet N, Claes F, van Hong N, Torres K, Mao S, et al. True versus apparent malaria infection prevalence: the contribution of a Bayesian approach. PLoS One. 2011;6:e16705.\n\n11. 11.\n\nSpeybroeck N, Devleesschauwer B, Joseph L, Berkvens D. Misclassification errors in prevalence estimation: Bayesian handling with care. Int J Public Health. 2013;58:791–5.\n\n12. 12.\n\nNeuhaus JM. Bias and efficiency loss due to misclassified responses in binary regression. Biometrika. 1999;86:843–55.\n\n13. 13.\n\nDuffy SW, Warwick J, Williams ARW, Keshavarz H, Kaffashian F, Rohan TE, et al. A simple model for potential use with a misclassified binary outcome in epidemiology. J Epidemiol Commun Health. 2004;58:712–7.\n\n14. 14.\n\nChen Q, Galfalvy H, Duan N. Effects of disease misclassification on exposure-disease association. Am J Public Health. 2013;103:e67–73.\n\n15. 15.\n\nMagder LS, Hughes JP. Logistic regression when the outcome is measured with uncertainty. Am J Epidemiol. 1997;146:195–203.\n\n16. 16.\n\nLyles RH, Tang L, Superak HM, King CC, Celentano DD, Lo Y, et al. Validation data-based adjustments for outcome misclassification in logistic regression: an illustration. Epidemiology. 2011;22:589–97.\n\n17. 17.\n\nFox MP, Lash TL, Greenland S. A method to automate probabilistic sensitivity analyses of misclassified binary variables. Int J Epidemiol. 2005;34:1370–6.\n\n18. 18.\n\nMcInturff P, Johnson WO, Cowling D, Gardner IA. Modelling risk when binary outcomes are subject to error. Stat Med. 2004;23:1095–109.\n\n19. 19.\n\nValle D, Clark J. Improving the modeling of disease data from the government surveillance system: a case study on malaria in the Brazilian Amazon. PLoS Comput Biol. 2013;9:e1003312.\n\n20. 20.\n\nStamey JD, Young DM, Seaman JW Jr. A Bayesian approach to adjust for diagnostic misclassification between two mortality causes in Poisson regression. Stat Med. 2008;27:2440–52.\n\n21. 21.\n\nO’Meara WP, Barcus M, Wongsrichanalai C, Muth S, Maguire JD, Jordan RG, et al. Reader technique as a source of variability in determining malaria parasite density by microscopy. Malar J. 2006;5:118.\n\n22. 22.\n\nValle D, Clark J, Zhao K. Enhanced understanding of infectious diseases by fusing multiple datasets: a case study on malaria in the Western Brazilian Amazon region. PLoS One. 2011;6:e27462.\n\n23. 23.\n\nSilva-Nunes MD, Codeco CT, Malafronte RS, da Silva NS, Juncansen C, Muniz PT, et al. Malaria on the Amazonian frontier: transmission dynamics, risk factors, spatial distribution, and prospects for control. Am J Trop Med Hyg. 2008;79:624–35.\n\n24. 24.\n\nLadeia-Andrade S, Ferreira MU, de Carvalho ME, Curado I, Coura JR. Age-dependent acquisition of protective immunity to malaria in riverine populations of the Amazon Basin of Brazil. Am J Trop Med Hyg. 2009;80:452–9.\n\n25. 25.\n\nAlves FP, Durlacher RR, Menezes MJ, Krieger H, da Silva LHP, Camargo EP. High prevalence of asymptomatic Plasmodium vivax and Plasmodium falciparum infections in native Amazonian populations. Am J Trop Med Hyg. 2002;66:641–8.\n\n26. 26.\n\nMtove G, Nadjm B, Amos B, Hendriksen ICE, Muro F, Reyburn H. Use of an HRP2-based rapid diagnostic test to guide treatment of children admitted to hospital in a malaria-endemic area of north-east Tanzania. Trop Med Int Health. 2011;16:545–50.\n\n27. 27.\n\nOnchiri FM, Pavlinac PB, Singa BO, Naulikha JM, Odundo EA, Farguhar C, et al. Frequency and correlates of malaria over-treatment in areas of differing malaria transmission: a cross-sectional study in rural Western Kenya. Malar J. 2015;14:97.\n\n28. 28.\n\nvan Genderen PJJ, van der Meer IM, Consten J, Petit PLC, van Gool T, Overbosch D. Evaluation of plasma lactate as a parameter for disease severity on admission in travelers with Plasmodium falciparum malaria. J Travel Med. 2005;12:261–4.\n\n29. 29.\n\nAlonzo TA, Pepe MS, Lumley T. Estimating disease prevalence in two-phase studies. Biostatistics. 2003;4:313–26.\n\n30. 30.\n\nHalloran ME, Longini IM Jr. Using validation sets for outcomes and exposure to infection in vaccine field studies. Am J Epidemiol. 2001;154:391–8.\n\n31. 31.\n\nBlack MA, Craig BA. Estimating disease prevalence in the absence of a gold standard. Stat Med. 2002;21:2653–69.\n\n32. 32.\n\nBrenner H. Correcting for exposure misclassification using an alloyed gold standard. Epidemiology. 1996;7:406–10.\n\n33. 33.\n\nTang L, Lyles RH, King CC, Celentano DD, Lo Y. Binary regression with differentially misclassified response and exposure variables. Stat Med. 2015;34:1605–20.\n\n34. 34.\n\nTang L, Lyles RH, King CC, Hogan JW, Lo Y. Regression analysis for differentially misclassified correlated binary outcomes. J R Stat Soc Ser C Appl Stat. 2015;64:433–49.\n\n35. 35.\n\nRichardson BB, Hughes JP. Product limit estimation for infectious disease data when the diagnostic test for the outcome is measured with uncertainty. Biostatistics. 2000;1:341–54.\n\n## Authors’ contributions\n\nDV performed data analyses, derived the main results in the article, wrote the initial draft of the manuscript. JM and PA conducted the systematic literature review. JMTL, JM, PA, and UH reviewed the manuscript and provided critical feedback. All authors read and approved the final manuscript.\n\n### Acknowledgements\n\nWe thank Gregory Glass, Song Liang and Justin Lessler for providing comments on an earlier draft of this manuscript.\n\n### Competing interests\n\nThe authors declare that they have no competing interests.\n\n## Author information\n\nAuthors\n\n### Corresponding author\n\nCorrespondence to Denis Valle.\n\nCode for the Bayesian models used in the main manuscript.\n\nSimulated data used to illustrate Bayesian model 1.\n\nSimulated data used to illustrate Bayesian model 2.",
null,
""
] | [
null,
"https://malariajournal.biomedcentral.com/track/article/10.1186/s12936-015-0966-y",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87866205,"math_prob":0.9383508,"size":35449,"snap":"2021-43-2021-49","text_gpt3_token_len":7716,"char_repetition_ratio":0.14518268,"word_repetition_ratio":0.040553525,"special_character_ratio":0.2223194,"punctuation_ratio":0.14546026,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98532116,"pos_list":[0,1,2],"im_url_duplicate_count":[null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T14:47:27Z\",\"WARC-Record-ID\":\"<urn:uuid:2eb0078d-b102-465d-9f13-90f8b4c179d4>\",\"Content-Length\":\"271825\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fab69c9a-afcb-4f2b-ac27-5b1d6c84432e>\",\"WARC-Concurrent-To\":\"<urn:uuid:61240eb4-7f34-4b37-a968-c74cdda7bb2a>\",\"WARC-IP-Address\":\"199.232.64.95\",\"WARC-Target-URI\":\"https://malariajournal.biomedcentral.com/articles/10.1186/s12936-015-0966-y\",\"WARC-Payload-Digest\":\"sha1:E35Q5AWO2UEBIQW56IX254D2WNIS63SR\",\"WARC-Block-Digest\":\"sha1:2XYKNIUGB4ABCXOD5K6YHIEHKUYDDMJA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585270.40_warc_CC-MAIN-20211019140046-20211019170046-00146.warc.gz\"}"} |
https://socratic.org/questions/find-the-limit-as-x-approaches-infinity-of-x-1-x | [
"# Find the limit as x approaches infinity of x ^(1 /x)?\n\nSep 8, 2014\n\nBy using logarithmic properties,\n${\\lim}_{x \\to \\infty} {x}^{\\frac{1}{x}} = 1$\n\nLet us look at some details.\nSince $x = {e}^{\\ln x}$,\n${\\lim}_{x \\to \\infty} {x}^{\\frac{1}{x}} = {\\lim}_{x \\to \\infty} {e}^{\\ln {x}^{\\frac{1}{x}}}$\nby the property $\\ln {x}^{r} = r \\ln x$,\n$= {\\lim}_{x \\to \\infty} {e}^{\\frac{\\ln x}{x}}$\nby squeeze the limit in the exponent,\n$= {e}^{{\\lim}_{x \\to \\infty} \\frac{\\ln x}{x}}$\nby l'Hopital's Rule,\n${e}^{{\\lim}_{x \\to \\infty} \\frac{\\frac{1}{x}}{1}} = {e}^{0} = 1$"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6459702,"math_prob":1.0000087,"size":346,"snap":"2022-27-2022-33","text_gpt3_token_len":86,"char_repetition_ratio":0.10818713,"word_repetition_ratio":0.0,"special_character_ratio":0.23988439,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000076,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T10:59:27Z\",\"WARC-Record-ID\":\"<urn:uuid:cac9670b-af25-487b-bbcc-7808decc420c>\",\"Content-Length\":\"33085\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8904549-8d5f-4fbf-b627-9bbeaf05852c>\",\"WARC-Concurrent-To\":\"<urn:uuid:1934e866-0b65-420a-bfed-4f869dafc5b0>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/find-the-limit-as-x-approaches-infinity-of-x-1-x\",\"WARC-Payload-Digest\":\"sha1:AZCROHGOJV7IBYD4OBSFP24BTIZBKHJJ\",\"WARC-Block-Digest\":\"sha1:4DPN3WYICVAVRKYVOBJDGAFRZHSMSRCU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573667.83_warc_CC-MAIN-20220819100644-20220819130644-00064.warc.gz\"}"} |
https://calcpercentage.com/what-is-180-percent-of-341 | [
"# PercentageCalculator, What is 180% of 341?\n\n## What is 180 percent of 341? 180% of 341 is equal to 613.8\n\n%\n\n### How to Calculate 180 Percent of 341?\n\n• F\n\nFormula\n\n(180 ÷ 100) x 341 = 613.8\n\n• 1\n\nConvert percent to decimal\n\n180% to decimal is 180 ÷ 100 = 1.8\n\n• 2\n\nMultiply the decimal number with the second number\n\n1.8 x 341 = 613.8\n\n#### Example\n\nFor example, John needs 180 percent of the shares to be in power. The number of shares is 341. How many shares does John need to buy? What is 180 percent of 341? 180 percent to decimal is equal 180 / 100 = 1.8 1.8 x 341 = 613.8 so John need to buy 613.8 shares"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88860726,"math_prob":0.9996481,"size":425,"snap":"2022-27-2022-33","text_gpt3_token_len":142,"char_repetition_ratio":0.15201901,"word_repetition_ratio":0.023255814,"special_character_ratio":0.4282353,"punctuation_ratio":0.1262136,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99951386,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T15:16:43Z\",\"WARC-Record-ID\":\"<urn:uuid:5166c2d7-38ac-41d8-bf8c-8d8dd7117ca7>\",\"Content-Length\":\"11958\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2e0e500-f047-4076-8b7d-eaddd62b8215>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc60579c-0c9b-46ec-a0f2-188407cf4414>\",\"WARC-IP-Address\":\"76.76.21.9\",\"WARC-Target-URI\":\"https://calcpercentage.com/what-is-180-percent-of-341\",\"WARC-Payload-Digest\":\"sha1:OTZB4ZHQT2WV3TWAQFD3CM4ZXPUNDOEN\",\"WARC-Block-Digest\":\"sha1:P5GLQNUV7LUFAQQ7S5GUXESYA6AGUIPN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103556871.29_warc_CC-MAIN-20220628142305-20220628172305-00317.warc.gz\"}"} |
https://numbermatics.com/n/2374409/ | [
"# 2374409\n\n## 2,374,409 is an odd composite number composed of two prime numbers multiplied together.\n\nWhat does the number 2374409 look like?\n\nThis visualization shows the relationship between its 2 prime factors (large circles) and 4 divisors.\n\n2374409 is an odd composite number. It is composed of two distinct prime numbers multiplied together. It has a total of four divisors.\n\n## Prime factorization of 2374409:\n\n### 101 × 23509\n\nSee below for interesting mathematical facts about the number 2374409 from the Numbermatics database.\n\n### Names of 2374409\n\n• Cardinal: 2374409 can be written as Two million, three hundred seventy-four thousand, four hundred nine.\n\n### Scientific notation\n\n• Scientific notation: 2.374409 × 106\n\n### Factors of 2374409\n\n• Number of distinct prime factors ω(n): 2\n• Total number of prime factors Ω(n): 2\n• Sum of prime factors: 23610\n\n### Divisors of 2374409\n\n• Number of divisors d(n): 4\n• Complete list of divisors:\n• Sum of all divisors σ(n): 2398020\n• Sum of proper divisors (its aliquot sum) s(n): 23611\n• 2374409 is a deficient number, because the sum of its proper divisors (23611) is less than itself. Its deficiency is 2350798\n\n### Bases of 2374409\n\n• Binary: 10010000111011000010012\n• Base-36: 1EW3T\n\n### Squares and roots of 2374409\n\n• 2374409 squared (23744092) is 5637818099281\n• 2374409 cubed (23744093) is 13386486035295699929\n• The square root of 2374409 is 1540.9117430923\n• The cube root of 2374409 is 133.4090146733\n\n### Scales and comparisons\n\nHow big is 2374409?\n• 2,374,409 seconds is equal to 3 weeks, 6 days, 11 hours, 33 minutes, 29 seconds.\n• To count from 1 to 2,374,409 would take you about five weeks!\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 2374409 cubic inches would be around 11.1 feet tall.\n\n### Recreational maths with 2374409\n\n• 2374409 backwards is 9044732\n• The number of decimal digits it has is: 7\n• The sum of 2374409's digits is 29\n• More coming soon!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8522367,"math_prob":0.90414816,"size":2822,"snap":"2021-04-2021-17","text_gpt3_token_len":758,"char_repetition_ratio":0.13271824,"word_repetition_ratio":0.04357798,"special_character_ratio":0.33486888,"punctuation_ratio":0.17222223,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9945973,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-17T00:36:34Z\",\"WARC-Record-ID\":\"<urn:uuid:3cc9f550-4fc8-4d59-a034-db082c448c91>\",\"Content-Length\":\"17277\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03aeca83-c608-414c-8798-add7b391de3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:46b2e3c2-36c5-41d6-b9ed-f114a81d4766>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/2374409/\",\"WARC-Payload-Digest\":\"sha1:D65XCLGI5GIBV3TBF65IXDD7VCR2XTAU\",\"WARC-Block-Digest\":\"sha1:VGQPM4C7FOVYELQBCAZRBNL5SEWOPOVR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703507971.27_warc_CC-MAIN-20210116225820-20210117015820-00091.warc.gz\"}"} |
https://elearning.vector.com/mod/page/view.php?id=390 | [
"## FlexRay Bus Level\n\nPhysical signal transmission in a FlexRay cluster is based on transmission of voltage differences (differential signal transmission). Therefore, the transmission medium (FlexRay bus) consists of the two lines Bus Plus (BP) and Bus Minus (BM).\n\nThe Electrical Physical Layer Specification defines four bus levels, which are assigned either to the recessive or the dominant bus state. The recessive bus state is characterized by a differential voltage of 0 Volt. The dominant bus state, on the other hand, has a differential voltage not equal to zero Volt.\n\nBoth, the idle bus level and the idle low power bus level, are recessive. The idle bus level is characterized by the two lines exhibiting a 2.5 Volt potential, resulting in a differential voltage of 0 Volt. The valid range for the idle bus level lies between 1.8 Volt and 3.2 Volt.\n\nThe idle low power bus level appears on the FlexRay bus when all FlexRay transceivers are in their low-power mode. This bus level is also characterized by a differential voltage of 0 Volt, but in this case the lines exhibit a potential of 0 Volt. The valid range here is between -0.2 Volt and 0.2 Volt.\n\nThe two bus levels Data_1 and Data_0 are dominant bus levels. With the Data_1 bus level the BP bus line exhibits a potential of 3.5 Volt, and the BM bus line a potential of 1.5 Volt. The resulting differential voltage is 2 Volt. The Data_1 bus level represents logical one.\n\nWith the Data_0 bus level the BP bus line has a potential of 1.5 Volt, and the BM bus line a potential of 3.5 Volt. The resulting differential voltage is -2 Volt. The Data_0 bus level represents logical zero.\n\nBus states and bus levels can be seen in the figure “FlexRay Bus Level”. Also shown are the relevant voltage thresholds for sender and receiver.",
null,
""
] | [
null,
"https://elearning.vector.com/pluginfile.php/565/mod_page/content/3/FR_2.6_GRA_FlexRayBus_EN.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8688496,"math_prob":0.9170257,"size":1944,"snap":"2021-43-2021-49","text_gpt3_token_len":477,"char_repetition_ratio":0.17216495,"word_repetition_ratio":0.098802395,"special_character_ratio":0.22839506,"punctuation_ratio":0.11658031,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95523,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T11:33:49Z\",\"WARC-Record-ID\":\"<urn:uuid:61f594be-edd8-4a4b-b03d-d5e4c9f5da4a>\",\"Content-Length\":\"44958\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6fec1676-480a-415e-89bb-63e2a11e8d8e>\",\"WARC-Concurrent-To\":\"<urn:uuid:e2149634-9154-477c-bb46-e5a4ed887262>\",\"WARC-IP-Address\":\"212.211.181.46\",\"WARC-Target-URI\":\"https://elearning.vector.com/mod/page/view.php?id=390\",\"WARC-Payload-Digest\":\"sha1:BPNHYV2RH5D3Z7TQFY2LXUQ5ATOTEH4F\",\"WARC-Block-Digest\":\"sha1:YKZX3VOFBRK5ARBFWKIMVBS3AUQ6EPRR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587877.85_warc_CC-MAIN-20211026103840-20211026133840-00499.warc.gz\"}"} |
https://chemistry.stackexchange.com/questions/2422/find-mass-of-co%E2%82%82-and-heat-released-per-minute-in-a-combustion-reaction | [
"# Find mass of CO₂ and heat released per minute in a combustion reaction\n\nI am not sure where to start with this problem. I know that $\\small q = mc(T_f-T_i)$, but that seems like it would not help here. I think I need to balance the equation, but I am not sure how the rate factors into the problem. Anyway, if anyone could help me find the relevant equations/method for solving this problem, it would be greatly appreciated:\n\nA natural gas mixture is burned in a furnace at a power-generating station at a rate of 13.0 mol per minute.\n\n1. If the fuel consists of 9.3 mole $\\small\\ce{CH4}$, 3.1 mole $\\small\\ce{C2H6}$, 0.40 mol $\\small\\ce{C3H8}$, and 0.20 mole $\\small\\ce{C4H10}$, what mass of $\\small\\ce{CO2} (g)$ is produced per minute?\n2. How much heat is released per minute?\n• I think it's safe to treat each of those reactions separately (so CH4 will reaction with O2 to form CO2 and H2O without being affected by C2H6, which will itself react with O2, etc.). In terms of heat released, you probably have a table of enthalpies (deltaH) of formation for each of those reactions in a table or something. – jonsca Oct 22 '12 at 6:03"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9418989,"math_prob":0.9803479,"size":702,"snap":"2019-43-2019-47","text_gpt3_token_len":204,"char_repetition_ratio":0.13323782,"word_repetition_ratio":0.0,"special_character_ratio":0.2962963,"punctuation_ratio":0.12658228,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98862725,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T08:27:46Z\",\"WARC-Record-ID\":\"<urn:uuid:0844cb41-a72e-4f97-b312-d75a8cf5ba38>\",\"Content-Length\":\"132502\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca27ee19-524d-4b0e-b396-639a71202c1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:fade1f98-3d8e-4e10-83a4-1eee857b28f4>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/2422/find-mass-of-co%E2%82%82-and-heat-released-per-minute-in-a-combustion-reaction\",\"WARC-Payload-Digest\":\"sha1:MU47D3PCYI6AAPOEQUYG5BA3QNZFEPXL\",\"WARC-Block-Digest\":\"sha1:VC4KTNLT4NXXIOXHAZEZ2EJ2GDT7WZU7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496664808.68_warc_CC-MAIN-20191112074214-20191112102214-00407.warc.gz\"}"} |
https://www.bartleby.com/solution-answer/chapter-126-problem-33acp-chemistry-and-chemical-reactivity-10th-edition/9781337399074/the-density-of-gray-tin-is-5769-gcm3-determine-the-dimensions-in-pm-of-the-cubic-crystal/0c4d6ab8-73df-11e9-8385-02ee952b546e | [
"",
null,
"",
null,
"",
null,
"Chapter 12.6, Problem 3.3ACP\n\nChapter\nSection\nTextbook Problem\n\nThe density of gray tin is 5.769 g/cm3. Determine the dimensions (in pm) of the cubic crystal lattice.\n\nInterpretation Introduction\n\nInterpretation:\n\nThe dimension of the cubic crystal lattice in gray tin has to be calculated.\n\nConcept introduction:\n\nTo find formula for the dimension of the atom is given below,\n\nLength of one side=V3\n\nExplanation\n\nNumber of atoms per unit is given below,\n\nIt has 8 atoms in the corner, 6 atom in the face, 4 atoms in the body.\n\nNumber of atoms per unit= (8corneratoms×1/8+6faceatom×1/2+4bodyatoms×1)Number of atoms per unit=8\n\ndensity =5.769g/cm3molarmassoftin=118.710 g/mol\n\nZ = number of atoms per unit.\n\nThe Avogadro number NA=6.02×1023\n\nMassoftheunitcell=8 × 118.7106.023×1023Massoftheunitcell=1\n\nStill sussing out bartleby?\n\nCheck out a sample textbook solution.\n\nSee a sample solution\n\nThe Solution to Your Study Problems\n\nBartleby provides explanations to thousands of textbook problems written by our experts, many with advanced degrees!\n\nGet Started\n\nFind more solutions based on key concepts",
null,
""
] | [
null,
"https://www.bartleby.com/static/search-icon-white.svg",
null,
"https://www.bartleby.com/static/close-grey.svg",
null,
"https://www.bartleby.com/static/solution-list.svg",
null,
"https://www.bartleby.com/static/logo.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72460973,"math_prob":0.9561179,"size":2232,"snap":"2019-43-2019-47","text_gpt3_token_len":488,"char_repetition_ratio":0.20511669,"word_repetition_ratio":0.098684214,"special_character_ratio":0.17428315,"punctuation_ratio":0.09620991,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9940036,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T19:33:35Z\",\"WARC-Record-ID\":\"<urn:uuid:38dd9a4f-fbcb-4f5b-8ad6-8406162ed4f9>\",\"Content-Length\":\"315437\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca4efad8-80c4-4b23-91a9-10d18f1118e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0f254aa-a022-4bc3-82d6-2f60120277dd>\",\"WARC-IP-Address\":\"99.84.181.3\",\"WARC-Target-URI\":\"https://www.bartleby.com/solution-answer/chapter-126-problem-33acp-chemistry-and-chemical-reactivity-10th-edition/9781337399074/the-density-of-gray-tin-is-5769-gcm3-determine-the-dimensions-in-pm-of-the-cubic-crystal/0c4d6ab8-73df-11e9-8385-02ee952b546e\",\"WARC-Payload-Digest\":\"sha1:VY3BJCLEM4QUSKKTJ3PIUYNUGNGXCWBY\",\"WARC-Block-Digest\":\"sha1:FV37RMBOUECRW7IL6BFX6CILFFLR5JOX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986697760.44_warc_CC-MAIN-20191019191828-20191019215328-00060.warc.gz\"}"} |
https://physics.stackexchange.com/questions/146553/biot-savart-and-amperes-law-inconsistentcy | [
"# Biot-Savart and Amperes law inconsistentcy\n\nImagine the situation below where a steady current $I$ goes through the line. The radius of the semicircle is $r$. How do I compute the magnetic flux in the point $P$, marked by a black dot?",
null,
"Furthermore, I don't know the answer, and the two attempts that I have made gave me different results. I want to have this inconsistency resolved:\n\nSolution 1. Biot-Savart law:\n\n$$\\mathbf{B} = \\frac{\\mu_0 I}{4\\pi r^2}\\int_C\\rm{d}\\mathbf{l}\\times \\mathbf{\\hat{r}} = \\frac{\\mu_0 I}{4\\pi r^2}\\int_C \\rm{dl}=\\frac{\\mu_0 I}{4r}$$\n\nsince $\\rm{d}\\mathbf{l}\\times \\mathbf{\\hat{r}} = dl\\cdot 1\\cdot \\rm{sin}(\\frac{\\pi}{2})=dl$ and $\\int_C dl=\\pi r$ since the curve is the semicircle.\n\nSolution 2. Ampere's lag $$\\oint \\mathbf{B}\\cdot \\rm{d}\\mathbf{l} = B\\cdot 2\\pi r = I\\mu_0\\Rightarrow B = \\frac{I\\mu_0}{2\\pi r}$$\n\n• Could you explain the second method? I don't understand it. What curve are you integrating around and how did you do the integral? – Brian Moths Nov 13 '14 at 15:31\n• @NowIGetToLearnWhatAHeadIs In the second integral I am integrating along a circle. The circle passes through the point $P$ and the current passes through its center. You can think of the circle has going around the middle of the semicircle. Since everything is symmetric, I expected the magnetic flux to be constant along this circle. – user714 Nov 13 '14 at 15:36\n• Which, given Alfred's answer, I now understand is wrong of course... – user714 Nov 13 '14 at 15:46\n\nOne can make an educated guess that the magnetic field at $P$ is one-half the value of the magnetic field at the center of a current loop which is given by:\n$$B = \\frac{\\mu_0 I}{2R}$$\nThe approach of your solution 2 doesn't make any sense to me. The integral is along a closed path and, evidently, you're assuming $\\mathbf B \\cdot d\\mathbf l$ is a constant along a closed path of constant $r$?\n• Please see my comment to the question which explains the second integral. Although I am sure you have the right reason for it being wrong. Why did you get divided by $2R$ instead of $4R$ which is what I got? – user714 Nov 13 '14 at 15:39"
] | [
null,
"https://i.stack.imgur.com/u6yjh.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7791524,"math_prob":0.9934215,"size":796,"snap":"2020-34-2020-40","text_gpt3_token_len":287,"char_repetition_ratio":0.135101,"word_repetition_ratio":0.0,"special_character_ratio":0.34296483,"punctuation_ratio":0.07185629,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99963593,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-11T13:47:11Z\",\"WARC-Record-ID\":\"<urn:uuid:f78ceef2-b41d-4378-ac1f-9f47cc2a81ff>\",\"Content-Length\":\"148836\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:36c1f8c9-5736-491a-a0d7-e92193dc6683>\",\"WARC-Concurrent-To\":\"<urn:uuid:0643b82a-b9a7-4145-850e-7abbac166df8>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/146553/biot-savart-and-amperes-law-inconsistentcy\",\"WARC-Payload-Digest\":\"sha1:V453CNZHGPQVUE3XG7JT5RBHTLRO6QAK\",\"WARC-Block-Digest\":\"sha1:RPC22SBNOID5OFQT6R5EVSCT5V6OFLRG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738777.54_warc_CC-MAIN-20200811115957-20200811145957-00554.warc.gz\"}"} |
http://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/10/7/a/ | [
"Properties\n\n Label 10.7.a Level 10 Weight 7 Character orbit a Rep. character $$\\chi_{10}(1,\\cdot)$$ Character field $$\\Q$$ Dimension 0 Newforms 0 Sturm bound 10 Trace bound 0\n\nRelated objects\n\n Level: $$N$$ = $$10 = 2 \\cdot 5$$ Weight: $$k$$ = $$7$$ Character orbit: $$[\\chi]$$ = 10.a (trivial) Character field: $$\\Q$$ Newforms: $$0$$ Sturm bound: $$10$$ Trace bound: $$0$$"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.621833,"math_prob":1.00001,"size":351,"snap":"2019-43-2019-47","text_gpt3_token_len":125,"char_repetition_ratio":0.16714698,"word_repetition_ratio":0.0,"special_character_ratio":0.43589744,"punctuation_ratio":0.16393442,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999831,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T07:38:44Z\",\"WARC-Record-ID\":\"<urn:uuid:1a5eae6b-960b-42d2-b45b-67bfe8038232>\",\"Content-Length\":\"16478\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99a0ba50-dc52-414b-a650-e955e7be8356>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9ba17db-9a04-48ae-9cee-b276cb511c81>\",\"WARC-IP-Address\":\"35.241.19.59\",\"WARC-Target-URI\":\"http://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/10/7/a/\",\"WARC-Payload-Digest\":\"sha1:PWD7GMIS6YPQUEBU7VGYE4QQH4QBTHFQ\",\"WARC-Block-Digest\":\"sha1:O24C632USD2OBA57NP6SR6IILVMMZKVG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986673250.23_warc_CC-MAIN-20191017073050-20191017100550-00544.warc.gz\"}"} |
https://sindro.me/posts/2009-02-20-the-blinking-border/ | [
"This is the obfuscated piece of Javascript code that implements the red border and loads Google Analytics on the Segmentation Fault site :\n\n``````var theLoadSequenceToRunAfterTheDocumentHasBeenLoaded = function() {\n\n//\n(function(t){// (C) 2009 vjt <[email protected]>\nvar \\$=function(_){return(document.getElementById(_));};var ee =[\n\\$('n'),\\$('s'),\\$('w'),\\$('e')],e,_=true;setInterval(function(){for\n(var i=ee.length;i&&(e=ee[--i]) ;_) {e.className=e.className?'':\n'b';}},t*08); /* .oOo.oOo.oOo. ^^^^^ -*** * *** *** *******- **/\n})((4 + 8 + 15 + 16 + 23 + 42) * Math.PI / Math.E + 42/*166.81*/);"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.58072406,"math_prob":0.94915813,"size":1369,"snap":"2023-40-2023-50","text_gpt3_token_len":374,"char_repetition_ratio":0.07472528,"word_repetition_ratio":0.0,"special_character_ratio":0.30898467,"punctuation_ratio":0.19433199,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97132516,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T16:14:40Z\",\"WARC-Record-ID\":\"<urn:uuid:43361423-50fe-4d65-bc96-0854fd3df028>\",\"Content-Length\":\"30139\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bad0d380-057c-4533-ad72-dff241c5a64e>\",\"WARC-Concurrent-To\":\"<urn:uuid:02c745a6-cf94-4cbe-9ccf-e10ee8ad02e8>\",\"WARC-IP-Address\":\"46.38.233.77\",\"WARC-Target-URI\":\"https://sindro.me/posts/2009-02-20-the-blinking-border/\",\"WARC-Payload-Digest\":\"sha1:VTHHB3TRN4YSCRYMHN67TZ2JSKHT5ZJX\",\"WARC-Block-Digest\":\"sha1:KZHFV22KSAQ4RTVVXSUYCWOPBSMTI3VR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100762.64_warc_CC-MAIN-20231208144732-20231208174732-00808.warc.gz\"}"} |
https://support.microsoft.com/en-us/office/sequence-function-57467a98-57e0-4817-9f14-2eb78519ca90?ui=en-us&rs=en-us&ad=us | [
"Hello,\nSelect a different account.\nYou have multiple accounts\n\nThe SEQUENCE function allows you to generate a list of sequential numbers in an array, such as 1, 2, 3, 4.\n\nIn the following example, we created an array that's 4 rows tall by 5 columns wide with =SEQUENCE(4,5).",
null,
"=SEQUENCE(rows,[columns],[start],[step])\n\nArgument\n\nDescription\n\nrows\n\nRequired\n\nThe number of rows to return\n\n[columns]\n\nOptional\n\nThe number of columns to return\n\n[start]\n\nOptional\n\nThe first number in the sequence\n\n[step]\n\nOptional\n\nThe amount to increment each subsequent value in the array\n\nNotes:\n\n• Any missing optional arguments will default to 1. If you omit the rows argument, you must provide at least one other argument.\n\n• An array can be thought of as a row of values, a column of values, or a combination of rows and columns of values. In the example above, the array for our SEQUENCE formula is range C1:G4.\n\n• The SEQUENCE function will return an array, which will spill if it's the final result of a formula. This means that Excel will dynamically create the appropriate sized array range when you press ENTER. If your supporting data is in an Excel table, then the array will automatically resize as you add or remove data from your array range if you're using structured references. For more details, see this article on spilled array behavior.\n\n• Excel has limited support for dynamic arrays between workbooks, and this scenario is only supported when both workbooks are open. If you close the source workbook, any linked dynamic array formulas will return a #REF! error when they are refreshed.\n\n## Example\n\nIf you need to create a quick sample dataset, here's an example using SEQUENCE with TEXT, DATE, YEAR, and TODAY to create a dynamic list of months for a header row, where the underlying date will always be the current year. Our formula is: =TEXT(DATE(YEAR(TODAY()),SEQUENCE(1,6),1),\"mmm\").",
null,
"Here's an example of nesting SEQUENCE with INT and RAND to create a 5 row by 6 column array with a random set of increasing integers. Our formula is: =SEQUENCE(5,6,INT(RAND()*100),INT(RAND()*100)).",
null,
"In addition, you could use =SEQUENCE(5,1,1001,1000) to create the sequential list of GL Code numbers in the examples.\n\n## Need more help?\n\nYou can always ask an expert in the Excel Tech Community or get support in Communities.\n\n### Want more options?\n\nExplore subscription benefits, browse training courses, learn how to secure your device, and more.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
""
] | [
null,
"https://support.content.office.net/en-us/media/366f259c-c72d-4b14-8969-41ed30a3e88d.png",
null,
"https://support.content.office.net/en-us/media/67d36c71-2823-4342-968c-e0a24aff2b2b.png",
null,
"https://support.content.office.net/en-us/media/428f060c-46c1-4452-b7d9-dab41251b477.png",
null,
"https://support.content.office.net/en-us/media/f4e85874-2a1a-438d-9c3c-17b069c454c0.png",
null,
"https://support.content.office.net/en-us/media/a9241eee-a729-4513-97b4-5b87c381c21b.png",
null,
"https://support.content.office.net/en-us/media/9e557d93-f803-44df-a274-1282d542cf63.png",
null,
"https://support.content.office.net/en-us/media/fbf6e41b-ddbe-43db-a616-7a8e48d43d18.png",
null,
"https://support.content.office.net/en-us/media/9255871d-06a6-4de5-9236-5fd7af100c5c.png",
null,
"https://support.content.office.net/en-us/media/ccb7c2a6-17dd-4cc3-88b7-8da966e59f59.png",
null,
"https://support.content.office.net/en-us/media/bcd2fdf1-530a-482f-b96d-5f2f2a49ac66.png",
null,
"https://support.content.office.net/en-us/media/f4e85874-2a1a-438d-9c3c-17b069c454c0.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7475378,"math_prob":0.9255097,"size":2297,"snap":"2023-40-2023-50","text_gpt3_token_len":540,"char_repetition_ratio":0.120366335,"word_repetition_ratio":0.0,"special_character_ratio":0.237266,"punctuation_ratio":0.12663755,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96293104,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,6,null,6,null,6,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T08:13:05Z\",\"WARC-Record-ID\":\"<urn:uuid:333e003a-bb2e-46ca-887f-c3389f893d78>\",\"Content-Length\":\"131334\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d3b8b17-b771-49cd-8ba6-c400be849a2d>\",\"WARC-Concurrent-To\":\"<urn:uuid:2cc061be-75ef-41ca-9cad-36ab6caaf39c>\",\"WARC-IP-Address\":\"23.50.124.110\",\"WARC-Target-URI\":\"https://support.microsoft.com/en-us/office/sequence-function-57467a98-57e0-4817-9f14-2eb78519ca90?ui=en-us&rs=en-us&ad=us\",\"WARC-Payload-Digest\":\"sha1:N3BKRDH5YBJ5DTFRWMU5LOJX2QLNUM5U\",\"WARC-Block-Digest\":\"sha1:NEMX525V65C3JMBCYEX7N67RVJOAE6T2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100873.6_warc_CC-MAIN-20231209071722-20231209101722-00384.warc.gz\"}"} |
https://metanumbers.com/1176 | [
"# 1176 (number)\n\n1,176 (one thousand one hundred seventy-six) is an even four-digits composite number following 1175 and preceding 1177. In scientific notation, it is written as 1.176 × 103. The sum of its digits is 15. It has a total of 6 prime factors and 24 positive divisors. There are 336 positive integers (up to 1176) that are relatively prime to 1176.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 4\n• Sum of Digits 15\n• Digital Root 6\n\n## Name\n\nShort name 1 thousand 176 one thousand one hundred seventy-six\n\n## Notation\n\nScientific notation 1.176 × 103 1.176 × 103\n\n## Prime Factorization of 1176\n\nPrime Factorization 23 × 3 × 72\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 6 Total number of prime factors rad(n) 42 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 1,176 is 23 × 3 × 72. Since it has a total of 6 prime factors, 1,176 is a composite number.\n\n## Divisors of 1176\n\n1, 2, 3, 4, 6, 7, 8, 12, 14, 21, 24, 28, 42, 49, 56, 84, 98, 147, 168, 196, 294, 392, 588, 1176\n\n24 divisors\n\n Even divisors 18 6 3 3\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 24 Total number of the positive divisors of n σ(n) 3420 Sum of all the positive divisors of n s(n) 2244 Sum of the proper positive divisors of n A(n) 142.5 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 34.2929 Returns the nth root of the product of n divisors H(n) 8.25263 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 1,176 can be divided by 24 positive divisors (out of which 18 are even, and 6 are odd). The sum of these divisors (counting 1,176) is 3,420, the average is 14,2.5.\n\n## Other Arithmetic Functions (n = 1176)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 336 Total number of positive integers not greater than n that are coprime to n λ(n) 84 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 196 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 336 positive integers (less than 1,176) that are coprime with 1,176. And there are approximately 196 prime numbers less than or equal to 1,176.\n\n## Divisibility of 1176\n\n m n mod m 2 3 4 5 6 7 8 9 0 0 0 1 0 0 0 6\n\nThe number 1,176 is divisible by 2, 3, 4, 6, 7 and 8.\n\n## Classification of 1176\n\n• Refactorable\n• Abundant\n\n### Expressible via specific sums\n\n• Polite\n• Practical\n• Non-hypotenuse\n\n• Triangle\n\n• Frugal\n\n## Base conversion (1176)\n\nBase System Value\n2 Binary 10010011000\n3 Ternary 1121120\n4 Quaternary 102120\n5 Quinary 14201\n6 Senary 5240\n8 Octal 2230\n10 Decimal 1176\n12 Duodecimal 820\n20 Vigesimal 2ig\n36 Base36 wo\n\n## Basic calculations (n = 1176)\n\n### Multiplication\n\nn×y\n n×2 2352 3528 4704 5880\n\n### Division\n\nn÷y\n n÷2 588 392 294 235.2\n\n### Exponentiation\n\nny\n n2 1382976 1626379776 1912622616576 2249244197093376\n\n### Nth Root\n\ny√n\n 2√n 34.2929 10.5553 5.85601 4.11227\n\n## 1176 as geometric shapes\n\n### Circle\n\n Diameter 2352 7389.03 4.34475e+06\n\n### Sphere\n\n Volume 6.81256e+09 1.7379e+07 7389.03\n\n### Square\n\nLength = n\n Perimeter 4704 1.38298e+06 1663.12\n\n### Cube\n\nLength = n\n Surface area 8.29786e+06 1.62638e+09 2036.89\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 3528 598846 1018.45\n\n### Triangular Pyramid\n\nLength = n\n Surface area 2.39538e+06 1.91671e+08 960.2\n\n## Cryptographic Hash Functions\n\nmd5 a7d8ae4569120b5bec12e7b6e9648b86 a8f3809c6de28975c83009f6e2911b81f95e5b5a 74e9f3d8efbda803994e08efba32440782235900ee20c261ff17b60e69a3d347 0d5ec3ba07572e17a650ba3257044565eb666db6b98850f11b2223c74404028a8375473936583f617288658e821b647272790c94ca8f40be562959189c6d9969 1193a198304102146567d58d899789646df55124"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6279686,"math_prob":0.98354137,"size":4544,"snap":"2023-14-2023-23","text_gpt3_token_len":1804,"char_repetition_ratio":0.12356828,"word_repetition_ratio":0.020057306,"special_character_ratio":0.44586268,"punctuation_ratio":0.07835821,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99610406,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-02T05:26:33Z\",\"WARC-Record-ID\":\"<urn:uuid:559f8308-c1e4-4b61-8e5d-973e0eb27350>\",\"Content-Length\":\"41307\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2a1a45bd-ac33-4b02-b8ff-fc9ca63e78da>\",\"WARC-Concurrent-To\":\"<urn:uuid:87fc8bcb-ef25-4bae-b6bf-cad4b4997652>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/1176\",\"WARC-Payload-Digest\":\"sha1:AMM2G4Z2G6WPWE6AJ5UM7JPQGHMA735U\",\"WARC-Block-Digest\":\"sha1:5JKXDDQ4YL3QBSACAYZQ65LZFM3PQVMC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950383.8_warc_CC-MAIN-20230402043600-20230402073600-00134.warc.gz\"}"} |
https://pillowlab.wordpress.com/tag/state-space-models/ | [
"# Lab meeting 6/27/2011\n\nThis week I followed up on the previous week’s meeting about state-space models with a tutorial on Kalman filtering / smoothing. We started with three Gaussian “fun facts” about linear transformations of Gaussian random variables and products of Gaussian densities. Then we derived the Kalman filtering equations, the EM algorithm, and discussed a simple implementation of Kalman smoothing using sparse matrices and the “backslash” operator in matlab.\n\nHere’s how to do Kalman smoothing in one line of matlab:\nXmap = (Qinv + speye(nsamps) / varY) \\ (Y / varY + Qinv * muX);\n\nwhere the latent variable X has prior mean muX and inverse covariance Qinv, and Y | X is Gaussian with mean X and variance varY * I. Note Qinv is tri-diagonal and can be formed with a single call to “spdiags”.\n\n# Lab meeting 6/20/2011\n\nToday I presented a paper from Liam’s group: “A new look at state-space models for neural data”, Paninski et al, JCNS 2009\n\nThe paper presents a high-level overview of state-space models for neural data, with an emphasis on statistical inference methods. The basic setup of these models is the following:\n\n• Latent variable",
null,
"$Q$ defined by dynamics distribution:",
null,
"$P(q_{t+1}|q_t)$\n• Observed variable",
null,
"$Y$ defined by observation distribution:",
null,
"$P(y_t | q_t)$.\n\nThese two ingredients ensure that the joint probability of latents and observed variables is",
null,
"$P(Q,Y) = P(q_1 ) P(y_1|q_1) \\prod_{t=2}^T P(y_t | q_t) P(q_{t}|q_{t-1})$.\nA variety of applications are illustrated (e.g.,",
null,
"$Q$ = common input noise;",
null,
"$Y$ = multi-neuron spike trains).\n\nThe two problems we’re interested in solving, in general, are:\n(1) Filtering / Smoothing: inferring",
null,
"$Q$ from noisy observations",
null,
"$Y$, given the model parameters",
null,
"$\\theta$.\n(2) Parameter Fitting: inferring",
null,
"$\\theta$ from observations",
null,
"$Y$.\n\nThe “standard” approach to these problems involves: (1) recursive approximate inference methods that involve updating a Gaussian approximation to",
null,
"$P(q_t|Y)$ using its first two moments; and (2) Expectation-Maximization (EM) for inferring",
null,
"$\\theta$. By contrast, this paper emphasizes: (1) exact maximization for",
null,
"$Q$, which is tractable in",
null,
"$O(T)$ via Newton’s Method, due to the banded nature of the Hessian; and (2) direct inference for",
null,
"$\\theta$ using the Laplace approximation to",
null,
"$P(Y|\\theta)$. When the dynamics are linear and the noise is Gaussian, the two methods are exactly the same (since a Gaussian’s maximum is the same as its mean; the forward and backward recursions in Kalman Filtering/Smoothing are the same set of operations needed by Newton’s method). But for non-Gaussian noise or non-linear dynamics, the latter method may (the paper argues) provide much more accurate answers with approximately the same computational cost.\n\nKey ideas of the paper are:\n\n• exact maximization of a log-concave posterior\n•",
null,
"$O(T)$ computational cost, due to sparse (tridiagonal or banded) Hessian.\n• the Laplace approximation (Gaussian approximation to the posterior using its maximum and second-derivative matrix), which is (more likely to be) justified for log-concave posteriors\n• log-boundary method for constrained problems (which preserves sparsity)\n\nNext week: we’ll do a basic tutorial on Kalman Filtering / Smoothing (and perhaps, EM)."
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87155086,"math_prob":0.9868777,"size":2982,"snap":"2021-31-2021-39","text_gpt3_token_len":645,"char_repetition_ratio":0.109133646,"word_repetition_ratio":0.0,"special_character_ratio":0.20791416,"punctuation_ratio":0.10404624,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99921626,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T12:57:43Z\",\"WARC-Record-ID\":\"<urn:uuid:1a1e2286-5950-4a0d-9238-9835361f304f>\",\"Content-Length\":\"52329\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4b2325e4-0a16-41e8-a9e5-2f6ab55eb3b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a69add3-7f2a-4e55-bcf4-1c9990878b75>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://pillowlab.wordpress.com/tag/state-space-models/\",\"WARC-Payload-Digest\":\"sha1:JCMSJTN32IVEO6TVVJNU5ZFDWEPGRWJV\",\"WARC-Block-Digest\":\"sha1:V5HEM5RCEZULIVKS36QVEOVEBHQZ46BA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153729.44_warc_CC-MAIN-20210728123318-20210728153318-00648.warc.gz\"}"} |
https://calcforme.com/percentage-calculator/what-is-322-percent-of-59 | [
"# What is 322% of 59?\n\n## 322 percent of 59 is equal to 189.98\n\n%\n\n322% of 59 equal to 189.98\n\nCalculation steps:\n\n( 322 ÷ 100 ) x 59 = 189.98\n\n### Calculate 322 Percent of 59?\n\n• F\n\nFormula\n\n(322 ÷ 100) x 59 = 189.98\n\n• 1\n\nPercent to decimal\n\n322% to decimal is 322 ÷ 100 = 3.22\n\n• 2\n\nMultiply decimal with the other number\n\n3.22 x 59 = 189.98\n\nExample"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61308104,"math_prob":0.9999765,"size":228,"snap":"2022-27-2022-33","text_gpt3_token_len":92,"char_repetition_ratio":0.19196428,"word_repetition_ratio":0.0,"special_character_ratio":0.55263156,"punctuation_ratio":0.13207547,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99968743,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T05:15:45Z\",\"WARC-Record-ID\":\"<urn:uuid:23e115d1-30fb-48ff-97aa-964017a37d55>\",\"Content-Length\":\"14836\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e978c7fd-58cc-40a1-aa7b-e6bc1281c02a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8c99d86-3fde-4b57-86f2-b0caa4c5d3d1>\",\"WARC-IP-Address\":\"76.76.21.241\",\"WARC-Target-URI\":\"https://calcforme.com/percentage-calculator/what-is-322-percent-of-59\",\"WARC-Payload-Digest\":\"sha1:BUVT2DWLHF3JRQR2GGSJ53O4RUNP23PC\",\"WARC-Block-Digest\":\"sha1:GTCO7DFT4ZCNRBZKJ2IBB223DGNUP6HR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104215790.65_warc_CC-MAIN-20220703043548-20220703073548-00612.warc.gz\"}"} |
https://www.omnicalculator.com/physics/conservation-of-momentum | [
"# Conservation of Momentum Calculator\n\nCreated by Bogna Szyk and Steven Wooding\nReviewed by Jack Bowater\nLast updated: Dec 22, 2022\n\nPrefer watching rather than reading? Check out our video lesson on the conservation of momentum here:\n\n## Law of conservation of momentum\n\nThe principle of momentum conservation says that for an isolated system, the sum of the momentums of all objects is constant (it doesn't change). An isolated system is a system of objects (it can be, and typically is, more than one body) that don't interact with anything outside the system. In such a system, no momentum disappears: whatever is lost by one object is gained by the other.\n\nImagine two toy cars on a table. Let's assume they form an isolated system - no external force acts on them, and the table is frictionless. One of the cars moves at a constant speed of 3 km/h and hits the second toy car (that remained stationary), causing it to move. You can observe that the first car visibly slows down after the collision. This result happened because some momentum was transferred from the first car to the second car.\n\n## Elastic and inelastic collisions\n\nThe main difference between the types of momentum is related to how the kinetic energy of the system behaves. If this type of energy is not familiar to you, you may be interested in looking at our kinetic energy calculator article and understanding it before digging into the types of collisions.\n\nWe can distinguish three types of collisions:\n\n• Perfectly elastic: In an elastic collision, both the momentum and kinetic energy of the system are conserved. Bodies bounce off each other. An excellent example of such a collision is between hard objects, such as marbles or billiard balls.\n• Partially elastic: In such a collision, momentum is conserved, and bodies move at different speeds, but kinetic energy is not conserved. It does not mean that it disappears, though; some of the energy is utilized to perform work (such as creating heat or deformation). A car crash is an example of a partially elastic collision - metal gets deformed, and some kinetic energy is lost.\n• Perfectly inelastic: After an inelastic collision, bodies stick together and move at a common speed. Momentum is conserved, but some kinetic energy is lost. For example, when a fast-traveling bullet hits a wooden target, it can get stuck inside the target and keep moving with it.\n\nYou may notice that while the law of conservation of momentum is valid in all collisions, the sum of all objects' kinetic energy changes in some cases. The potential energy, however, stays the same (which is in line with the potential energy formula).\n\n## How to use the conservation of momentum calculator\n\nYou can use our conservation of momentum calculator to consider all cases of collisions. To calculate the velocities of two colliding objects, simply follow these steps:\n\n1. Enter the masses of the two objects. Let's assume that the first object has a mass of 8 kg while the second one weighs 4 kg.\n2. Decide how fast the objects are moving before the collision. For example, the first object may move at a speed of 10 m/s while the second one remains stationary (speed = 0 m/s).\n3. Determine the final velocity of one of the objects. For example, we know that after the collision, the first object will slow down to 4 m/s.\n4. Calculate the momentum of the system before the collision. In this case, the initial momentum is equal to 8 kg * 10 m/s + 4 kg * 0 m/s = 80 N·s.\n5. According to the law of conservation of momentum, total momentum must be conserved. The final momentum of the first object is equal to 8 kg * 4 m/s = 32 N·s. To ensure no losses, the second object must have momentum equal to 80 N·s - 32 N·s = 48 N·s, so its speed is equal to 48 Ns / 4 kg = 12 m/s.\n6. You can also open the advanced mode to see how the system's kinetic energy changed and determine whether the collision was elastic, partially elastic, or inelastic.\n\n## FAQ\n\n### What is the principle of conservation of momentum?\n\nAccording to the principle of conservation of momentum, the total linear momentum of an isolated system, i.e., a system for which the net external force is zero, is constant.\n\n### Under what circumstances is momentum conserved?\n\nIn order to conserve momentum, there should be no net external force acting on the system. If the net external force is not zero, momentum is not conserved.\n\n### What is an example of the conservation of momentum?\n\nThe recoil of a gun when we fire a bullet from it is an example of the conservation of momentum. Both the bullet and the gun are at rest before the bullet is fired. When the bullet is fired, it moves in the forward direction. The gun moves in the backward direction to conserve the total momentum of the system.\n\n### What is the principle that makes a rocket move?\n\nThe principle that makes a rocket move is the law of conservation of linear momentum. The fuel burnt in the rocket produces hot gas. The hot gas is ejected from the exhaust nozzle and goes in one direction. The rocket goes in the opposite direction to conserve momentum.\n\nBogna Szyk and Steven Wooding",
null,
"Collision type\nDon't know\nObject no. 1\nMass (m₁)\nkg\nInitial velocity (u₁)\nm/s\nFinal velocity (v₁)\nm/s\nIf you are trying to solve a problem that only has one object, maybe our impulse and momentum calculator would be more useful.\nObject no. 2\nMass (m₂)\nkg\nInitial velocity (u₂)\nm/s\nFinal velocity (v₂)\nm/s\nPeople also viewed…\n\nAddiction calculator tells you how much shorter your life would be if you were addicted to alcohol, cigarettes, cocaine, methamphetamine, methadone, or heroin.\n\n### Black hole collision\n\nThe Black Hole Collision Calculator lets you see the effects of a black hole collision, as well as revealing some of the mysteries of black holes, come on in and enjoy!\n\n### Magnetic dipole moment\n\nCalculate the magnetic dipole moment of a current-carrying loop or a solenoid with our magnetic dipole moment calculator.\n\n### Thermodynamic processes\n\nTry this combined gas law calculator to determine the basic properties of the most common thermodynamic processes.",
null,
""
] | [
null,
"https://uploads-cdn.omnicalculator.com/images/sw-con-of-momentum-equation.png",
null,
"data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIHdpZHRoPSI5OTk5cHgiIGhlaWdodD0iOTk5OXB4IiB2aWV3Qm94PSIwIDAgOTk5OSA5OTk5IiB2ZXJzaW9uPSIxLjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgeG1sbnM6eGxpbms9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGxpbmsiPjxkZWZzPjxyYWRpYWxHcmFkaWVudCBpZD0id2hpdGUiIGN4PSI1MCUiIGN5PSI1MCUiIHI9IjUwJSIgZng9IjUwJSIgZnk9IjUwJSI+PHN0b3Agb2Zmc2V0PSIwJSIgc3R5bGU9InN0b3AtY29sb3I6cmdiKDI1NSwyNTUsMjU1KTtzdG9wLW9wYWNpdHk6MC4wMSIgLz48c3RvcCBvZmZzZXQ9IjEwMCUiIHN0eWxlPSJzdG9wLWNvbG9yOnJnYigyNTUsMjU1LDI1NSk7c3RvcC1vcGFjaXR5OjAiIC8+PC9yYWRpYWxHcmFkaWVudD48L2RlZnM+PGcgc3Ryb2tlPSJub25lIiBmaWxsPSJ1cmwoI3doaXRlKSIgZmlsbC1vcGFjaXR5PSIxIj48cmVjdCB4PSIwIiB5PSIwIiB3aWR0aD0iOTk5OSIgaGVpZ2h0PSI5OTk5Ij48L3JlY3Q+PC9nPjwvc3ZnPg==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91719884,"math_prob":0.9406377,"size":5449,"snap":"2022-40-2023-06","text_gpt3_token_len":1150,"char_repetition_ratio":0.18126722,"word_repetition_ratio":0.025157232,"special_character_ratio":0.21049733,"punctuation_ratio":0.100283824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98468447,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T08:09:06Z\",\"WARC-Record-ID\":\"<urn:uuid:882c2f3c-eb97-4813-8bff-34ab213abbb4>\",\"Content-Length\":\"509718\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79297b4e-50c7-4c6a-8e10-2d254dfef0aa>\",\"WARC-Concurrent-To\":\"<urn:uuid:8cbff740-08fb-412b-ac3f-38f358b47e57>\",\"WARC-IP-Address\":\"68.70.205.2\",\"WARC-Target-URI\":\"https://www.omnicalculator.com/physics/conservation-of-momentum\",\"WARC-Payload-Digest\":\"sha1:72FNW7RW5HP5HNKHPZHTG3YHR72HCTAO\",\"WARC-Block-Digest\":\"sha1:MAKLPERHQQZTQOG7JDVCKEAYBWGQZMD4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499804.60_warc_CC-MAIN-20230130070411-20230130100411-00656.warc.gz\"}"} |
https://mathematica.stackexchange.com/questions/linked/14160?sort=unanswered | [
"20 questions linked to/from Plotting a Phase Portrait\n5k views\n\n### Phase space vector field [duplicate]\n\nI have a system of non linear equations and from NDSolve I get the solution. I plot the phase space with ...\n193 views\n\n### Phase Portrait for ODE with IVP\n\nI'm trying to make a phase portrait for the ODE x'' + 16x = 0, with initial conditions x=-1 & x'=0. I know how to solve the ODE and find the integration constants; the solution comes out to ...\n790 views\n\n### Plotting geodesics of upper half plane\n\nI want to solve (numerically) the geodesics of the upper half plane and plot. The results are quite known (i) straight lines parallel to $y$-axis and (ii) semicircles centered on the $x$-axis. Now the ...\n171 views\n\n### How to plot the phase portrait of a second-order differential equation?\n\nI am relatively very new to mathematica, but I have been trying to figure this out for hours. I have a second-order differential equation, $$ay''+by'+cy=0$$ and I have converted it into a first-...\n3k views\n\n### Rosenzweig-MacArthur predator-prey model [duplicate]\n\nThe predator-prey model is governed by the following system of ode's. \\begin{eqnarray} &&\\displaystyle{\\frac{dx}{dt}=r x\\left(1 - \\frac{x}{K}\\right) - \\frac{s y x}{1 + s \\tau x}},\\\\[0.1cm] &...\n835 views\n\n### How to plot Van der Pol equation's limit cycle? [duplicate]\n\nThe Van der Pol equation is $$y''-\\mu (1-y^2)y'+y=0, \\,\\, \\mbox{where}\\,\\, \\mu ≥ 0.$$ Use the Runge-Kutta Method to plot the limit cycle and some solutions in the phase plane (the $yy'$-plane) ...\n730 views\n\n### Is there a Mathematica version of ODE tools pplane and dfield?\n\nThe toolkits dfield and pplane are staples in ODE courses. First in MATLAB, now in Java. Do they have a Mathematica expression? Sample outputs follow: @Michael E2 Your suggestions were great. Here ...\n1k views\n\n### Plotting simple ODE system phase portrait [duplicate]\n\nI am new to Mathematica, please help me to handle the following problem: I have the following system of equations: \\begin{cases} \\dot x = 2xy \\\\ \\dot y = 1 - x^2 - y^2 \\end{cases} What I want is to ...\n447 views\n\n### Projection of a 3D ODE solution on a parametric 2D streamplot\n\nProblem I have a third-order dynamical system of which I'd like to plot the solutions on the streamplot defined by the same dynamical system, as a function of one of the three variables. These are ...\n177 views\n\n### Phase portrait plotting\n\nI'm trying to plot a phase portrait for the equation: (d^2/dt^2)y + b * (dy/dt)^2 = A, A > 0, b < 0 The first thing I did was changing ...\n649 views\n\n### How can I create a phase plane? [duplicate]\n\nI want to make a phase plane for an equation system. I want it to look something like a Mu-Space in Stat-Mech. Basically I want it to show the same system with various different initial condition ...\n4k views\n\n### How can I draw this particular phase diagram in Mathematica? [duplicate]\n\nSketch the phase diagram for systems with the following velocity functions where a and b are constants with ...\n188 views\n\n### EquationTrekker-like behavior for state space?\n\nEquationTrekker is great for phase space plots, however I want to plot the results of \\phi '(t)=-b \\sin (\\phi (t))+g \\sin (\\Phi (t)-\\phi (t))+1\\\\\\Phi '(t)=g y \\...\nMy question stems from exercise 4.3.3 in Murdock's book \"Pertubations: Theory and Methods\". I am asked in the following: Consider the problem $y''+y=\\epsilon y^2$ $y(0)=\\alpha$, $y'(0)=0$. Draw ..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87437946,"math_prob":0.9592928,"size":3874,"snap":"2020-45-2020-50","text_gpt3_token_len":1085,"char_repetition_ratio":0.13281654,"word_repetition_ratio":0.0030534351,"special_character_ratio":0.27749097,"punctuation_ratio":0.117048346,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99904597,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T09:50:48Z\",\"WARC-Record-ID\":\"<urn:uuid:dcdcad1f-f584-4d6d-9a9b-cff3d623a807>\",\"Content-Length\":\"132640\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1010ac41-436d-44c6-8c98-54d0b973189b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3e959cf-ffce-4396-b6da-42c7e1ceedb7>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/linked/14160?sort=unanswered\",\"WARC-Payload-Digest\":\"sha1:2P2DXQ5APRN7NE7ZC7URS7NZLE73PEZV\",\"WARC-Block-Digest\":\"sha1:YMSUNYKWGD657NAPNABM7TC4V7OKNSQC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107879362.3_warc_CC-MAIN-20201022082653-20201022112653-00469.warc.gz\"}"} |
https://www.jpost.com/israel/natorei-kartas-hirsch-dies-in-jerusalem | [
"(function (a, d, o, r, i, c, u, p, w, m) { m = d.getElementsByTagName(o), a[c] = a[c] || {}, a[c].trigger = a[c].trigger || function () { (a[c].trigger.arg = a[c].trigger.arg || []).push(arguments)}, a[c].on = a[c].on || function () {(a[c].on.arg = a[c].on.arg || []).push(arguments)}, a[c].off = a[c].off || function () {(a[c].off.arg = a[c].off.arg || []).push(arguments) }, w = d.createElement(o), w.id = i, w.src = r, w.async = 1, w.setAttribute(p, u), m.parentNode.insertBefore(w, m), w = null} )(window, document, \"script\", \"https://95662602.adoric-om.com/adoric.js\", \"Adoric_Script\", \"adoric\",\"9cc40a7455aa779b8031bd738f77ccf1\", \"data-key\");\nvar domain=window.location.hostname; var params_totm = \"\"; (new URLSearchParams(window.location.search)).forEach(function(value, key) {if (key.startsWith('totm')) { params_totm = params_totm +\"&\"+key.replace('totm','')+\"=\"+value}}); var rand=Math.floor(10*Math.random()); var script=document.createElement(\"script\"); script.src=`https://stag-core.tfla.xyz/pre_onetag?pub_id=34&domain=\\${domain}&rand=\\${rand}&min_ugl=0\\${params_totm}`; document.head.append(script);",
null,
""
] | [
null,
"https://images.jpost.com/image/upload/f_auto,fl_lossy/c_fill,g_faces:center,h_537,w_822/142353",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9801133,"math_prob":0.96919554,"size":924,"snap":"2023-14-2023-23","text_gpt3_token_len":209,"char_repetition_ratio":0.08913043,"word_repetition_ratio":0.0,"special_character_ratio":0.17207792,"punctuation_ratio":0.09876543,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9789482,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T16:08:23Z\",\"WARC-Record-ID\":\"<urn:uuid:bf8ed9a7-3fc9-45a6-b6ea-bb997ac965b1>\",\"Content-Length\":\"81046\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63813696-b76f-46b7-9ba2-399324e12594>\",\"WARC-Concurrent-To\":\"<urn:uuid:a0964f9c-3738-4b2f-aae5-d2318003d4b9>\",\"WARC-IP-Address\":\"159.60.130.79\",\"WARC-Target-URI\":\"https://www.jpost.com/israel/natorei-kartas-hirsch-dies-in-jerusalem\",\"WARC-Payload-Digest\":\"sha1:2RCHONS4KB2AD6MDPCSQXZW7LC3R6NX5\",\"WARC-Block-Digest\":\"sha1:Z7VR3CU3U6UL4NQFBTQVD4LFYWRNFE3E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657720.82_warc_CC-MAIN-20230610131939-20230610161939-00316.warc.gz\"}"} |
https://warwick.ac.uk/fac/sci/maths/people/staff/harper/ | [
"# Mathematics Institute",
null,
"Office: C2.28\nPhone: +44 (0)24 765 23542\nEmail: [email protected]\n\nTeaching Responsibilities 2020/21:\n\nTerm 1: MA4L6 Analytic Number Theory\n\nResearch Interests:\n\nI am a number theorist, and am particularly interested in analytic, combinatorial and probabilistic number theory. I also enjoy learning about more general topics in analysis, combinatorics, probability and statistics.\nThus far, my research has dealt with a selection of problems in probability and probabilistic number theory, including the behaviour of random multiplicative functions, multiplicative chaos, extreme values of Gaussian processes, and applications to moments of character sums and Dirichlet polynomials, and to the Shanks--Rényi prime number race between residue classes; with the distribution and applications of smooth numbers (that is numbers without large prime factors); with the behaviour of the Riemann zeta function on the critical line, both conjecturally and rigorously; with some additive combinatorics questions connected with sieve theory; and with estimating various sums of general deterministic multiplicative functions.\n\nLecture Notes:\n\nLecture notes for courses I am currently teaching in Warwick may be accessed by following the relevant links above.\n\nHere are the notes for some Part III (fourth year) number theory courses that I lectured in Cambridge.\n\nProbabilistic Number Theory (Michaelmas term 2015). Chapter 0. Chapter 1. Chapter 2. Chapter 3.\n\nElementary Methods in Analytic Number Theory (Lent term 2015). Chapter 0. Chapter 1. Chapter 2. Chapter 3.\n\nThe Riemann Zeta Function (Lent term 2014). Chapter 0. Chapter 1. Chapter 2. Chapter 3.\n\nOther Notes:\n\nThese informal notes aren't intended for publication, so aren't extremely polished, but people have occasionally asked me for copies of them. I hope they are accurate and of some interest.\n\nA different proof of a finite version of Vinogradov's bilinear sum inequality. (pdf link) This is a 3 page note giving a different proof of a bilinear sum inequality from a paper of Bourgain, Sarnak and Ziegler. The new proof exploits a classical kind of result from probabilistic number theory, namely that a certain divisor sum (additive function) is \"close to constant on average\" (i.e. has small variance). See Terence Tao's blog post for some more discussion of this topic.\n\nA version of Baker's theorem on linear forms in logarithms. (pdf link) These are fairly brief notes that I wrote when giving an expository talk about Baker's results on linear forms in logarithms. The notes should be thought of as giving a moderately detailed sketch proof, where my aim was to motivate the various steps of Baker's argument. When I gave the talk, the consensus was that one should think of the argument (constructing an auxiliary function) in the same spirit as the \"polynomial method\" from combinatorics.\n\nMost relevant recent publications:\n\nWorkshops/conferences/seminars:\n\nRecent research grants:\n\nRecent awards & prizes:\n\nPersonal Homepage:"
] | [
null,
"https://warwick.ac.uk/fac/sci/maths/people/staff/grey_head.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9083326,"math_prob":0.8038593,"size":3002,"snap":"2022-27-2022-33","text_gpt3_token_len":652,"char_repetition_ratio":0.11174116,"word_repetition_ratio":0.03982301,"special_character_ratio":0.19986676,"punctuation_ratio":0.12547529,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96808434,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-07T19:26:55Z\",\"WARC-Record-ID\":\"<urn:uuid:d8a98e65-ce77-4744-ba30-f9c9e44fe1d9>\",\"Content-Length\":\"33164\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ce980e6-6a30-4653-8ecd-7a33a3db059f>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9ab2746-ae44-45f2-8db0-5c16d76b04a2>\",\"WARC-IP-Address\":\"137.205.28.41\",\"WARC-Target-URI\":\"https://warwick.ac.uk/fac/sci/maths/people/staff/harper/\",\"WARC-Payload-Digest\":\"sha1:EZ6XUYGLLCREA6PKFHK2LTGRUOVZV3HR\",\"WARC-Block-Digest\":\"sha1:HU6PAMUVN27CIJOE2TYIDCNBCQ2TFHWH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570692.22_warc_CC-MAIN-20220807181008-20220807211008-00038.warc.gz\"}"} |
http://www.sjsu.edu/faculty/watkins/compstat.htm | [
"San José State University\nDepartment of Economics\n\napplet-magic.com\nThayer Watkins\nSilicon Valley\nUSA\n\n Comparative Statics Analysis in Economics\n\nMost of economic theory consists of comparative statics analysis. Comparative Statics is the determination of the changes in the endogenous variables of a model that that will reusult from a change in the exogenous variables or parameters of that model. A crucial bit of information is the sign of the changes in the endogenous variables.\n\nThere is very limited opportunity to establish the signs of the impacts of changes in macroeconomics or any field that does not have an explicit maximization or minimization operation involved. But in microeconomics comparative statics is a powerful tool for establishing important deductions of theories.\n\nFirst consider the case without maximization or minimization being involved, such as occurs in macroeconomics. The simplest case is situation in which one variable y is determined by some variable x. Suppose the value of y is determined as the solution to an equation,\n\n#### f(x,y) = 0\n\nThis equation holds for all valuees of x so it holds that the differential dy and dx satisfy the equation\n\n#### (∂f/∂y)dy + (∂f/∂x)dx = 0 or equivalently fydy + fxdx = 0\n\nThis relationship can be solved for dy; i.e.,\n\n#### dy = - (fx/fy)dx\n\nIn order to know the sign of the impact of a change in x on y we need to know the signs of both derivatives, fx and fy.\n\nNow consider the case in which y is determined such as to maximize some function g(x,y), where x has a value outside of the control of the decision maker. The first order condition for g(x,y) to be a maximum with respect to y is:\n\n#### ∂g/∂y = 0 or equivalently gy = 0\n\nThe second order condition is that:\n\n#### gyy > 0\n\nIf the value of x changes then\n\n#### gyydy + gyxdx 0 so dy = - (gyx/yy)dx\n\nWe know because y is chosen so as to mazimize g that the second order condition requires that gyy > 0. The sign of the impact of a change in x on y depends only upon the sign of gyx.\n\nExample 1:\n\nConsider a comparative statics analysis of monopoly pricing for a monopolist facing a market with a demand function of the form:\n\n#### Q = N(ay - bp)\n\nwhere N is the population in the market area, y is the percapita disposable income and p is the price of the product. a and b are positive parameters.\n\nThe total cost C for the firm is given by:\n\n#### C = F + cQ\n\nwhere F is fixed cost and c is the constant marginal variable cost.\n\nA comparative statics analysis tells how the monopoly price would be affected by changes in the exogenous variables N and y and in the parameters F and c.\n\nFrom the demand function Q = N(ay - bp), the inverse demand function (price as a function of quantity sold) is\n\n#### p = (a/b)y - Q/bN\n\nThe profit function for the monopolist is then\n\n#### Π = pQ - F - cQ = [(a/b)y - Q/bN]Q - F -cQ\n\nThe first order condition for a maximum is\n\n#### dΠ/dQ = (a/b) - 2Q/bN - c = 0 which means the critical value of Q is Q = N(a-bc)/2\n\nwhich says that a meaningful solution exists only if (a-bc)>0.\n\nThe second order condition for a maximum is\n\n#### d2Π/dQ2 < 0 which reduces to -2/bN < 0\n\nSince b and N are postivie the second order condition is automaticallly satisfied.\n\nThe comparative statics results can be determined in this case by simply differentiating the first order condition with respect to the parameters; i.e.,\n\n• ∂Q/∂a = N/2 which is positive\n• ∂Q/∂b = -Nc/2 which is negative\n• ∂Q/∂c = -Nb/2 which is negative\n• ∂Q/∂F = 0\n• ∂Q/∂N = (a-bc)/2 which is positive\n\nExample 2:\n\nIn the above example the second order condition was automatically satisfied. Now suppose the cost function is\n\n#### C = F + cQ - eQ2 + fQ3\n\nThis is the case of U-shaped marginal and average costs.\n\nIn this case the first and second conditons for a profit maximum reduce to:\n\n#### (a/b) - 2Q/bN - c + 2eQ - 3Q2 = 0 -2/bN +2e - 6Q < 0\n\nThe second order condition is satisfied only if\n\n#### Q > (e - 1/bN)/3\n\nThe first order condition is a quadratic equation in Q. It will have two solutions. One solution will be for a profit minimum and the other for a profit maximum. The solution that is greater than (e-1/bN)/3 will be for the profit maximum.\n\nThe partial derivative of the first order condition with respect to a is\n\n#### 1/b - (2/bN - 2e + 6Q)∂Q/∂a = 0 thus ∂Q/∂a = (1/b)/[2/bN - 2e + 6Q)] = (1/b)/[2(3Q - (e-1/bN))]\n\nThe denominator of the fraction involves positive and negative terms so without further information it would not be possible to determine the sign of the ratio. But the second order condition tells us that 3Q>(e-1/bN) so the numerator has to be positive and thus the ratio is positive. Therefore (∂Q/∂a)>0. Likewise the signs of the effects of changes in the other parameters can be determined.\n\nNow consider a couple of cases in which the economic variables are not determined from an optimization procedure.\n\nIt should be noted that when variables are not determined by the results of optimization less can be said about the sign of the comparative statics effects.\n\nExample 3:\n\nAn important application of comparative statics analysis is in macroeconomics. This is a nonoptimizing application so the opportunity to make theoretical deductions as to the sign of the impact of changes in the exogenous variables is more limited.\n\nA macroeconomic model is given in terms of a set of equations. The simplest macroeconomic model is the following in which\n\n• Y = national output, GDP\n• C = consumption\n• I = investment\n• G = government purchases\n• NX = net exports (exports - imports).\n\nThe model is then:\n\n#### Y = C + I + G + NX (an equilibrium condition) C = a + bY (the consumption function, a behavioral relation)with I, G and NX exogenous and a and b parameters.\n\nThis model is so simple it hardly requires any analysis. It can be solved explicitly for the endogenous variables Y and C; i.e.,\n\n#### Y = k(a + I + G + NX) C = a(1+kb) + kb(I + G + NX) where the multiplier k = 1/(1-b).\n\nDespite the simplicity of the above model it is worthwhile going through the general procedure which would have to be applied to more complicated models. First we need the differential form of the model, which in the above case is:\n\n#### dY = dC + dI + dG + dNX dC = bdY\n\nThe next step is put all the exogenous variables, in this case the differentials of Y and C, on the left side of the equations, leaving the right side for the differentials of the exogenous variables; i.e.,\n\n#### dY - dC = dI + dG + dNX -bdY + dC = 0\n\nThen the necssarily linear equations for the differentials are written as a matrix equation.\n\n | 1 -1 | | -b 1 |\n .\n | dY | | dC |\n =\n | 1 1 1 | | 0 0 0 |\n .\n | dI | | dG | | dNX |\n\nIf the vector of differentials of the endogenous variables is denoted as dZ and the vector of differentials of the exogenous variables as dX then the matrix equations can be expressed in the form\n\n#### ΓdZ = BdX\n\nThe solution is then\n\n#### dZ = Γ-1BdX\n\n.\n\nThe comparative statics analysis consists of finding the elements of the matrix Γ-1B\n\nWhile the matrix formulation has certain advantages for the purpose of an introduction to comparative statics it is better to obtain the solutions to the system of equations by way of Cramer's Rule. Cramer's Rule says that the solutions for the dependent variable can be expressed as a ratio of determinants. The denominator of the ratio is the determinant of the matrix of coefficients of the dependent variables. The numerator is the determinant of the matrix constructed by replacing the column of the coefficient matrix by the column of the constants on the RHS of the system of equations.\n\nFor example, if the effect of a change in I on Y is sought, then in the above equations dG and dNX are set equal to 0. The system of equations is then\n\n | 1 -1 | | -b 1 |\n .\n | dY | | dC |\n =\n | dI | | 0 |\n\nTo obtain dY in terms of dI take the ratio of the determinants of two matrices. One matrix is the coefficient matrix\n\n | 1 1 | | -b 1 |\n\nNote that dY corresponds to the first column of the coefficient matrix so the other matrix is the above matrix with the first column replaced\n\n | dI 1 | | 0 1 |\n\nTheir determinants are (1-b) and dI, respectively, so the solution for dY by Cramer's Rule is\n\n#### dY = dI/(1-b) and hence ∂Y/∂I = 1/(1-b)\n\nThe value of 1/(1-b) is called the multiplier.\n\nLikewise\n\n#### ∂Y/∂G = 1/(1-b) and ∂Y/∂NX = 1/(1-b)\n\nThe numerator in the ratio for dC is\n\n | 1 dI | | -b 0 |\n\nand thus dC = bdI/(1-b) and hence\n\n#### ∂C/∂I = b/(1-b)\n\nThis is also the value for ∂C/∂G and ∂C/∂NX\n\nExample 3a:\n\nAn extension of the analysis for the above macroeconomic model is one which is the same as above except that\n\nThus\n\n#### dYD = dY − dT = dY + ds − tdY − Ydt which reduces to dYD = (1-t)dY + ds − Ydt\n\nIf there are no changes in the parameters a and b then the analysis is the same as the previous model with b replaced with b(1-t).\n\n(To be continued.)\n\nExample 4:\n\nThis example deals with the interesting aspect of exports and imports being money values rather than physical units so exports and imports are expenditures rather than quanitities.\n\nSuppose exports depend upon the exchange rate E. Let E be the number of foreign currency units per dollar, say 100 yen per dollar. Suppose the demand function for American timber by Japanese users is:\n\n#### Q = a - bP,\n\nwhere Q is in physical units per year, say board-feet/yr, and P is the price of timber in yen, say yen per board-foot. If p is the U.S. price of timber, \\$ per board-foot, the price to Japanese buyers is pE. Thus the physical quantity of timber sold as a function of E is\n\n#### Q = a - bpE.\n\nBut for macroeconomic analysis what is needed is the dollar value of the sales; i.e.\n\n#### pQ = pa - bp2E,\n\nThus the dollar value of the level of exports is negatively related to E; i.e.,\n\n#### X = pa - bp2E.\n\nThe comparative statics analysis for this case gives effects on the dollar value of exports of the various variables and parameters:\n\n• ∂X/∂a = p which is positive\n• ∂X/∂b = -p2E which is negative\n• ∂X/∂E = -b2 which is negative\n• ∂X/∂p = -2bpE which is negative\n\nExample 5:\n\nNow suppose we have the demand function for some import to the U.S., say laptop's from Japan,\n\n#### Q = a -bp,\n\nwhere Q is the number of laptops per year and p is the price of laptops in dollars. If the price of laptops in Japan is P yen then the price in dollars is P/E. Thus the relationship between physical units of imports and the exchange rate is\n\n#### Q = a -bP/E.\n\nBut again we want the dollar value of the imports, pQ rather than physical units. Therefore the level of imports is\n\n#### M = pQ = PQ/E = P(a -bP/E)/E = aP/E - bP/E2 ,\n\na more complicated relationship than occured in Example 3 for exports.\n\nNow consider the marginal effects on the dollar value of imports M of a change in the parameters of the demand function, the price of laptops in Japan and the exchange rate E.\n\n• ∂M/∂a = aP/E which is positive\n• ∂M/∂b = -P/E2 which is negative\n• ∂M/∂P = a/E which is positive\n• ∂M/∂E = -aP/E2 + 2bP/E3 which is ambiguous\n\nExample 5:\n\nAn interesting comparative statics problem can now be formulated making use of the ideas presented above. Suppose a Japanese producer has monopoly for television sets in the U.S. as well as Japan. It can set the price for TV's in Japan. Given the exchange rate E the price for TV's in the U.S. is then determined. Let the cost function be\n\n#### C = F -cQ\n\nConsider the following:\n\n• the level of profits in Yen for the TV monopolist\n• the profit maximizing prices in Japan and the U.S.\n• the marginal effects of a change in the exchange rate on the prices in Japan and the U.S.\n• the marginal effects on prices of changes in the fixed cost and the marginal variable cost\n\n(To be continued.)\n\nThe quintessential economics problem is constrained optimization. Likewise the most interesting comparative statics analysis involves constraints. Consider the problem of mazimizing utility with respect to the consumption of two goods, x1 and x1 subject to a budget constraint, p1x1 + p2x2 = Y. The first order conditions for such a constrained maximzation problem are:\n\n#### ∂U/∂x1 = λp1 and ∂U/∂x2 = λp2\n\nThe second order conditions are that the relevant bordered Hessian matrix is negative definite.\n\nNow consider changes in p1 and p2, say dp1 and dp2. and a change in consumer income y, say dY. As a result of the changes in the parameters the rates of consumption will undergo some infintesimal changes, dx1 and dx2. These infinitesimal changes must satisfy the condition\n\n#### p1dx1 + p2dx2 + x1dp1 + x2dp2 = dY.\n\nThe first order conditions must be satisfied at any values for the parameters. Thus it is valid to differentiate the first order conditions with respect to the parameters. (In differentiating it must be remembered that the Lagrangian multiplier λ is now also a dependent variable like x1 and x2 and a function of the parameteres p1, p2 and Y.) The result is a set of equations that must be satisfied by the infinitesimal changes; i.e.,\n\n#### (∂2U/∂x12)dx1 + (∂2U/∂x2∂x1)dx2 - p1dλ = λdp1 and (∂2U/∂x1∂x2)dx1 + (∂2U/∂x22)dx2 - p2dλ = λdp2\n\nThese equations are combined with the equation from the budget constraint\n\n#### -p1dx1 - p2dx2 = -dY + x1dp1 + x2dp2\n\nThese equations form a system which can be represented in matrix form as:\n\n(To be continued.)\n\nExample 6: This is a numerical example of the general case dealt with in the previous materical. Let U=x1x2, with p1=2, p2=1 and Y =12. The values of x1 and x2 and of λ can be determined which maximize utility. Values of x1, x2 and λ can be determined which satisfy the first order conditions. The values of the second derivatives of U at the critical level can also be determined. The second order conditions require that the principal subdeterminants of the bordered Hessian matrix made up of the second derivatives and the prices should have specified signs.\n\nThe equations satisfied by effects of changes in the parameters can be created from the first order conditions. This solutions for the effects of the changes in the parameters can be expressed in terms of Cramer's rule as the ratio of determinants. The denominator of these ratios is a determinant whose sign is known from the second order conditions. Thus in many cases comparative statics results can be established with the combined use of the first order and second order conditions.\n\n(To be continued.)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90887874,"math_prob":0.99376434,"size":11803,"snap":"2019-26-2019-30","text_gpt3_token_len":2594,"char_repetition_ratio":0.15713196,"word_repetition_ratio":0.05263158,"special_character_ratio":0.21316615,"punctuation_ratio":0.07753655,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983029,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-23T15:32:07Z\",\"WARC-Record-ID\":\"<urn:uuid:f06c7e4a-f7c3-46ed-88a8-97d7c457c862>\",\"Content-Length\":\"21308\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d88b69e5-5d05-4bf3-8182-6f7a44bacb48>\",\"WARC-Concurrent-To\":\"<urn:uuid:45df262a-8f8a-4121-886c-a7c52d62c31e>\",\"WARC-IP-Address\":\"130.65.218.11\",\"WARC-Target-URI\":\"http://www.sjsu.edu/faculty/watkins/compstat.htm\",\"WARC-Payload-Digest\":\"sha1:IYSFITSQVOU7PPJRF6KGTNWIT4GIEKI7\",\"WARC-Block-Digest\":\"sha1:PMK6S6IE7FA2DYPC7RGVR32T75MV5PDY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195529480.89_warc_CC-MAIN-20190723151547-20190723173547-00425.warc.gz\"}"} |
https://wiki.pathmind.com/convolutional-network | [
"# A Beginner's Guide to Convolutional Neural Networks (CNNs)\n\nContents\n\n## Introduction to Deep Convolutional Neural Networks\n\nConvolutional neural networks are neural networks used primarily to classify images (i.e. name what they see), cluster images by similarity (photo search), and perform object recognition within scenes. For example, convolutional neural networks (ConvNets or CNNs) are used to identify faces, individuals, street signs, tumors, platypuses (platypi?) and many other aspects of visual data.\n\nThe efficacy of convolutional nets in image recognition is one of the main reasons why the world has woken up to the efficacy of deep learning. In a sense, CNNs are the reason why deep learning is famous. The success of a deep convolutional architecture called AlexNet in the 2012 ImageNet competition was the shot heard round the world. CNNs are powering major advances in computer vision (CV), which has obvious applications for self-driving cars, robotics, drones, security, medical diagnoses, and treatments for the visually impaired.\n\nConvolutional networks can also perform more banal (and more profitable), business-oriented tasks such as optical character recognition (OCR) to digitize text and make natural-language processing possible on analog and hand-written documents, where the images are symbols to be transcribed.\n\nCNNs are not limited to image recognition, however. They have been applied directly to text analytics. And they be applied to sound when it is represented visually as a spectrogram, and graph data with graph convolutional networks.\n\n## Images Are 4-D Tensors?\n\nConvolutional neural networks ingest and process images as tensors, and tensors are matrices of numbers with additional dimensions.\n\nThey can be hard to visualize, so let’s approach them by analogy. A scalar is just a number, such as 7; a vector is a list of numbers (e.g., `[7,8,9]`); and a matrix is a rectangular grid of numbers occupying several rows and columns like a spreadsheet. Geometrically, if a scalar is a zero-dimensional point, then a vector is a one-dimensional line, a matrix is a two-dimensional plane, a stack of matrices is a three-dimensional cube, and when each element of those matrices has a stack of feature maps attached to it, you enter the fourth dimension. For reference, here’s a 2 x 2 matrix:\n\n``````[ 1, 2 ]\n[ 5, 8 ]\n``````\n\nA tensor encompasses the dimensions beyond that 2-D plane. You can easily picture a three-dimensional tensor, with the array of numbers arranged in a cube. Here’s a 2 x 3 x 2 tensor presented flatly (picture the bottom element of each 2-element array extending along the z-axis to intuitively grasp why it’s called a 3-dimensional array):",
null,
"In code, the tensor above would appear like this: `[[[2,3],[3,5],[4,7]],[[3,4],[4,6],[5,8]]].` And here’s a visual:",
null,
"In other words, tensors are formed by arrays nested within arrays, and that nesting can go on infinitely, accounting for an arbitrary number of dimensions far greater than what we can visualize spatially. A 4-D tensor would simply replace each of these scalars with an array nested one level deeper. Convolutional networks deal in 4-D tensors like the one below (notice the nested array).",
null,
"With some tools, you will see `NDArray` used synonymously with tensor, or multi-dimensional array. A tensor’s dimensionality `(1,2,3…n)` is called its order; i.e. a fifth-order tensor would have five dimensions.\n\nThe width and height of an image are easily understood. The depth is necessary because of how colors are encoded. Red-Green-Blue (RGB) encoding, for example, produces an image three layers deep. Each layer is called a “channel”, and through convolution it produces a stack of feature maps (explained below), which exist in the fourth dimension, just down the street from time itself. (Features are just details of images, like a line or curve, that convolutional networks create maps of.)\n\nSo instead of thinking of images as two-dimensional areas, in convolutional nets they are treated as four-dimensional volumes. These ideas will be explored more thoroughly below.\n\n## Convolutional Definition\n\nFrom the Latin convolvere, “to convolve” means to roll together. For mathematical purposes, a convolution is the integral measuring how much two functions overlap as one passes over the other. Think of a convolution as a way of mixing two functions by multiplying them.",
null,
"Credit: Mathworld. “The green curve shows the convolution of the blue and red curves as a function of t, the position indicated by the vertical green line. The gray region indicates the product `g(tau)f(t-tau)` as a function of t, so its area as a function of t is precisely the convolution.”\n\nLook at the tall, narrow bell curve standing in the middle of a graph. The integral is the area under that curve. Near it is a second bell curve that is shorter and wider, drifting slowly from the left side of the graph to the right. The product of those two functions’ overlap at each point along the x-axis is their convolution. So in a sense, the two functions are being “rolled together.”\n\nWith image analysis, the static, underlying function (the equivalent of the immobile bell curve) is the input image being analyzed, and the second, mobile function is known as the filter, because it picks up a signal or feature in the image. The two functions relate through multiplication. To visualize convolutions as matrices rather than as bell curves, please see Andrej Karpathy’s excellent animation under the heading “Convolution Demo.”\n\nThe next thing to understand about convolutional nets is that they are passing many filters over a single image, each one picking up a different signal. At a fairly early layer, you could imagine them as passing a horizontal line filter, a vertical line filter, and a diagonal line filter to create a map of the edges in the image.\n\nConvolutional networks take those filters, slices of the image’s feature space, and map them one by one; that is, they create a map of each place that feature occurs. By learning different portions of a feature space, convolutional nets allow for easily scalable and robust feature engineering.\n\n(Note that convolutional nets analyze images differently than RBMs. While RBMs learn to reconstruct and identify the features of each image as a whole, convolutional nets learn images in pieces that we call feature maps.)\n\nSo convolutional networks perform a sort of search. Picture a small magnifying glass sliding left to right across a larger image, and recommencing at the left once it reaches the end of one pass (like typewriters do). That moving window is capable recognizing only one thing, say, a short vertical line. Three dark pixels stacked atop one another. It moves that vertical-line-recognizing filter over the actual pixels of the image, looking for matches.\n\nEach time a match is found, it is mapped onto a feature space particular to that visual element. In that space, the location of each vertical line match is recorded, a bit like birdwatchers leave pins in a map to mark where they last saw a great blue heron. A convolutional net runs many, many searches over a single image – horizontal lines, diagonal ones, as many as there are visual elements to be sought.\n\nConvolutional nets perform more operations on input than just convolutions themselves.\n\nAfter a convolutional layer, input is passed through a nonlinear transform such as tanh or rectified linear unit, which will squash input values into a range between -1 and 1.\n\n## How Convolutional Neural Networks Work\n\nThe first thing to know about convolutional networks is that they don’t perceive images like humans do. Therefore, you are going to have to think in a different way about what an image means as it is fed to and processed by a convolutional network.\n\n### Interested in reinforcement learning?\n\nAutomatically apply RL to simulation use cases (e.g. call centers, warehousing, etc.) using Pathmind.\n\nConvolutional networks perceive images as volumes; i.e. three-dimensional objects, rather than flat canvases to be measured only by width and height. That’s because digital color images have a red-blue-green (RGB) encoding, mixing those three colors to produce the color spectrum humans perceive. A convolutional network ingests such images as three separate strata of color stacked one on top of the other.\n\nSo a convolutional network receives a normal color image as a rectangular box whose width and height are measured by the number of pixels along those dimensions, and whose depth is three layers deep, one for each letter in RGB. Those depth layers are referred to as channels.\n\nAs images move through a convolutional network, we will describe them in terms of input and output volumes, expressing them mathematically as matrices of multiple dimensions in this form: 30x30x3. From layer to layer, their dimensions change for reasons that will be explained below.\n\nYou will need to pay close attention to the precise measures of each dimension of the image volume, because they are the foundation of the linear algebra operations used to process images.\n\nNow, for each pixel of an image, the intensity of R, G and B will be expressed by a number, and that number will be an element in one of the three, stacked two-dimensional matrices, which together form the image volume.\n\nThose numbers are the initial, raw, sensory features being fed into the convolutional network, and the ConvNets purpose is to find which of those numbers are significant signals that actually help it classify images more accurately. (Just like other feedforward networks we have discussed.)\n\nRather than focus on one pixel at a time, a convolutional net takes in square patches of pixels and passes them through a filter. That filter is also a square matrix smaller than the image itself, and equal in size to the patch. It is also called a kernel, which will ring a bell for those familiar with support-vector machines, and the job of the filter is to find patterns in the pixels.\n\nCredit for this excellent animation goes to Andrej Karpathy.\n\nImagine two matrices. One is 30x30, and another is 3x3. That is, the filter covers one-hundredth of one image channel’s surface area.\n\nWe are going to take the dot product of the filter with this patch of the image channel. If the two matrices have high values in the same positions, the dot product’s output will be high. If they don’t, it will be low. In this way, a single value – the output of the dot product – can tell us whether the pixel pattern in the underlying image matches the pixel pattern expressed by our filter.\n\nLet’s imagine that our filter expresses a horizontal line, with high values along its second row and low values in the first and third rows. Now picture that we start in the upper lefthand corner of the underlying image, and we move the filter across the image step by step until it reaches the upper righthand corner. The size of the step is known as stride. You can move the filter to the right one column at a time, or you can choose to make larger steps.\n\nAt each step, you take another dot product, and you place the results of that dot product in a third matrix known as an activation map. The width, or number of columns, of the activation map is equal to the number of steps the filter takes to traverse the underlying image. Since larger strides lead to fewer steps, a big stride will produce a smaller activation map. This is important, because the size of the matrices that convolutional networks process and produce at each layer is directly proportional to how computationally expensive they are and how much time they take to train. A larger stride means less time and compute.\n\nA filter superimposed on the first three rows will slide across them and then begin again with rows 4-6 of the same image. If it has a stride of three, then it will produce a matrix of dot products that is 10x10. That same filter representing a horizontal line can be applied to all three channels of the underlying image, R, G and B. And the three 10x10 activation maps can be added together, so that the aggregate activation map for a horizontal line on all three channels of the underlying image is also 10x10.\n\nNow, because images have lines going in many directions, and contain many different kinds of shapes and pixel patterns, you will want to slide other filters across the underlying image in search of those patterns. You could, for example, look for 96 different patterns in the pixels. Those 96 patterns will create a stack of 96 activation maps, resulting in a new volume that is 10x10x96. In the diagram below, we’ve relabeled the input image, the kernels and the output activation maps to make sure we’re clear.",
null,
"What we just described is a convolution. You can think of Convolution as a fancy kind of multiplication used in signal processing. Another way to think about the two matrices creating a dot product is as two functions. The image is the underlying function, and the filter is the function you roll over it.",
null,
"One of the main problems with images is that they are high-dimensional, which means they cost a lot of time and computing power to process. Convolutional networks are designed to reduce the dimensionality of images in a variety of ways. Filter stride is one way to reduce dimensionality. Another way is through downsampling.\n\n## Max Pooling/Downsampling with CNNs\n\nThe next layer in a convolutional network has three names: max pooling, downsampling and subsampling. The activation maps are fed into a downsampling layer, and like convolutions, this method is applied one patch at a time. In this case, max pooling simply takes the largest value from one patch of an image, places it in a new matrix next to the max values from other patches, and discards the rest of the information contained in the activation maps.",
null,
"Credit to Andrej Karpathy.\n\nOnly the locations on the image that showed the strongest correlation to each feature (the maximum value) are preserved, and those maximum values combine to form a lower-dimensional space.\n\nMuch information about lesser values is lost in this step, which has spurred research into alternative methods. But downsampling has the advantage, precisely because information is lost, of decreasing the amount of storage and processing required.\n\n### Alternating Layers\n\nThe image below is another attempt to show the sequence of transformations involved in a typical convolutional network.",
null,
"From left to right you see:\n\n• The actual input image that is scanned for features. The light rectangle is the filter that passes over it.\n• Activation maps stacked atop one another, one for each filter you employ. The larger rectangle is one patch to be downsampled.\n• The activation maps condensed through downsampling.\n• A new set of activation maps created by passing filters over the first downsampled stack.\n• The second downsampling, which condenses the second set of activation maps.\n• A fully connected layer that classifies output with one label per node.\n\nAs more and more information is lost, the patterns processed by the convolutional net become more abstract and grow more distant from visual patterns we recognize as humans. So forgive yourself, and us, if convolutional networks do not offer easy intuitions as they grow deeper."
] | [
null,
"https://wiki.pathmind.com/images/wiki/tensor.png",
null,
"https://wiki.pathmind.com/images/wiki/3d_matrix_cube.png",
null,
"https://wiki.pathmind.com/images/wiki/3d_matrix.png",
null,
"https://wiki.pathmind.com/images/wiki/convgaus.gif",
null,
"https://wiki.pathmind.com/images/wiki/karpathy-convnet-labels.png",
null,
"https://wiki.pathmind.com/images/wiki/convgaus.gif",
null,
"https://wiki.pathmind.com/images/wiki/maxpool.png",
null,
"https://wiki.pathmind.com/images/wiki/convnet.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92866796,"math_prob":0.96033907,"size":14464,"snap":"2020-34-2020-40","text_gpt3_token_len":3014,"char_repetition_ratio":0.14944676,"word_repetition_ratio":0.0033236393,"special_character_ratio":0.20243363,"punctuation_ratio":0.10939187,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9762523,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,4,null,4,null,4,null,8,null,4,null,8,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T13:34:51Z\",\"WARC-Record-ID\":\"<urn:uuid:6771cbf7-f153-44a5-930f-2ce39e09003a>\",\"Content-Length\":\"43197\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35214434-1020-4db6-934e-e3bcb4f0ea6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f7128dc-a23d-4700-8f32-8aafabfdff27>\",\"WARC-IP-Address\":\"104.26.0.128\",\"WARC-Target-URI\":\"https://wiki.pathmind.com/convolutional-network\",\"WARC-Payload-Digest\":\"sha1:J4CU7CLO5SD4IC6BBOJ67XE4DNMSRVGD\",\"WARC-Block-Digest\":\"sha1:PLN3XPIONYFHODHX7OK4Z6K3H5CRV245\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401643509.96_warc_CC-MAIN-20200929123413-20200929153413-00109.warc.gz\"}"} |
https://www.kseebsolutionsfor.com/class-10-maths-chapter-11-introduction-to-trigonometry-ex-11-4/ | [
"# KSEEB SSLC Class 10 Maths Solutions Chapter 11 Introduction to Trigonometry Ex 11.4\n\nIn this chapter, we provide KSEEB SSLC Class 10 Maths Solutions Solutions Chapter 11 Introduction to Trigonometry Ex 11.4 for English medium students, Which will very helpful for every student in their exams. Students can download the latest KSEEB SSLC Class 10 Maths Solutions Solutions Chapter 11 Introduction to Trigonometry Ex 11.4 pdf, free KSEEB SSLC Class 10 Maths Solutions Solutions Chapter 11 Introduction to Trigonometry Ex 11.4 pdf download. Now you will get step by step solution to each question.\n\n### Karnataka State Syllabus Class 10 Maths SolutionsChapter 11 Introduction to Trigonometry Ex 11.4\n\nQuestion 1.\nExpress the trigonometric ratios sin A, sec A and tan A in terms of cot A.\nSolution:\ncosec2 A – cot2 A = 1\ncosec2 A = 1 + cot2 A\ncosec2 A = cot2 A + 1",
null,
"Question 2.\nWrite all the other trigonometric ratios of ∠A in terms of sec A.\nSolution:\ni) sin2 A + cos2 A = 1\nsin2 A = 1 – cos2 A",
null,
"",
null,
"Question 3.\nEvaluate :\ni) sin263∘+sin27∘cos217∘+cos273∘\nii) sin 25° cos 65° + cos 25° sin 65°\nSolution:",
null,
"ii) sin 25° cos 65° + cos 25° sin 65°\n= sin 25° cos (90° – 25°) + cos 25° sin (90° – 25°)\n= sin 25° sin 25° + cos 25° + cos 25°\n= sin2 25° + cos2 25°\n= 1. [∵ cos2 θ + sin2 θ = 1]\n\nQuestion 4.\nChoose the correct option. Justify your choice.\ni) 9 sec2 A – 9 tan2 A.\nA) 1\nB) 9\nC) 8\nD) 0\nSolution:\nB) 9\n9 sec2 A – 9 tan2 A\n= 9(sec2 A – tan2 A)\n= 9 × 1\n= 9\n\nii) (1+tan θ + sec θ) (1+ cot θ- cosec θ) =\nA) 0\nB) 1\nC) 2\nD) -1\nSolution:\nC) 2\n(1+tan θ + sec θ) (1+ cot θ- cosec θ)",
null,
"",
null,
"iii) (sec A + tan A) (1 – sin A) =\nA) sec A\nB) sin A\nC) cosec A\nD) cos A\nSolution:\nD) cos A\n(sec A + tan A) (1 – sin A)",
null,
"iv) 1+tan2A1+cot2A =\nA) sec2 A\nB) -1\nC) cot2 A\nD) tan2 A\nSolution:\nD) tan2 A",
null,
"Question 5.\nProve the following identities, where the angles involved are acute angles for which the expressions are defined.",
null,
"Solution:",
null,
"",
null,
"Solution:",
null,
"",
null,
"Solution:",
null,
"",
null,
"",
null,
"Solution:",
null,
"",
null,
"Solution:",
null,
"",
null,
"",
null,
"Solution:",
null,
"",
null,
"Solution:",
null,
"",
null,
"",
null,
"Solution:",
null,
"",
null,
"Solution:",
null,
"",
null,
"Solution:",
null,
"",
null,
"All Chapter KSEEB Solutions For Class 10 Maths\n\n—————————————————————————–\n\nAll Subject KSEEB Solutions For Class 10\n\n*************************************************\n\nI think you got complete solutions for this chapter. If You have any queries regarding this chapter, please comment on the below section our subject teacher will answer you. We tried our best to give complete solutions so you got good marks in your exam.\n\nIf these solutions have helped you, you can also share kseebsolutionsfor.com to your friends.\n\nBest of Luck!!"
] | [
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-1.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-2.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-3.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-4.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-5.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-6.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-7.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-8.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-9.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-10.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-11.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-12.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-13.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-14.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-15.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-16.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-17.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-18.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-19.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-20.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-21.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-22.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-23.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-24.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-25.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-26.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-27.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-28.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-29.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-30.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-31.png",
null,
"https://ktbssolutions.com/wp-content/uploads/2019/05/KSEEB-SSLC-Class-10-Maths-Solutions-Chapter-11-Introduction-to-Trigonometry-Ex-11.4-32.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.626924,"math_prob":0.9509964,"size":2426,"snap":"2021-31-2021-39","text_gpt3_token_len":819,"char_repetition_ratio":0.18744838,"word_repetition_ratio":0.16772823,"special_character_ratio":0.3594394,"punctuation_ratio":0.10282258,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9886461,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T06:52:04Z\",\"WARC-Record-ID\":\"<urn:uuid:87e330b0-e061-43df-be27-ec88733dc3d3>\",\"Content-Length\":\"81382\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28ae521a-dadd-4c93-95b8-2c189f993c91>\",\"WARC-Concurrent-To\":\"<urn:uuid:25545815-7357-4b4c-bcad-b21aa36f71b9>\",\"WARC-IP-Address\":\"104.21.39.88\",\"WARC-Target-URI\":\"https://www.kseebsolutionsfor.com/class-10-maths-chapter-11-introduction-to-trigonometry-ex-11-4/\",\"WARC-Payload-Digest\":\"sha1:2ZJ6N3ZOTSAEON2X3SXVFE3556WR6LCT\",\"WARC-Block-Digest\":\"sha1:HII73PMQQBQ2ZK4SSVZAYP3O5LPDVCIG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060538.11_warc_CC-MAIN-20210928062408-20210928092408-00289.warc.gz\"}"} |
https://electronics.stackexchange.com/questions/24160/how-to-calculate-the-time-of-charging-and-discharging-of-battery/162813 | [
"# How to Calculate the time of Charging and Discharging of battery?\n\nHow do I calculate the approximated time for the Charging and Discharging of the battery? Is there any equation available for the purpose? If yes, then please provide me.\n\nDischarge time is basically the Ah or mAh rating divided by the current.\n\nSo for a 2200mAh battery with a load that draws 300mA you have:\n\n$\\frac{2.2}{0.3} = 7.3 hours$*\n\nThe charge time depends on the battery chemistry and the charge current.\n\nFor NiMh, for example, this would typically be 10% of the Ah rating for 10 hours.\n\nOther chemistries, such as Li-Ion, will be different.\n\n*2200mAh is the same as 2.2Ah. 300mA is the same as 0.3A\n\n• I would like to point out that some batteries, and certainly all circuits, will not work down to 0 Volts at the supply, so your circuit will stop working a long time before the battery is fully drained – chwi Aug 4 '15 at 18:24\n• This answer is about 4 years old now - but it's incorrect for LiIon batteries, which is what he is asking about. (Advised after this answer). See my answer for detail - but, LiIon can typically be charged at the C/1 rate until Vbat = 4.2V/cell. That takes typically 45 minutes to about 75% capacity and then about 2 hours at reducing rate for the balance . – Russell McMahon Nov 18 '15 at 15:39\n\nCharging of battery: Example: Take 100 AH battery. If the applied Current is 10 Amperes, then it would be 100Ah/10A= 10 hrs approximately. It is an usual calculation.\n\nDischarging: Example: Battery AH X Battery Volt / Applied load. Say, 100 AH X 12V/ 100 Watts = 12 hrs (with 40% loss at the max = 12 x 40 /100 = 4.8 hrs) For sure, the backup will lasts up to 4.8 hrs.\n\n• The charge formula above assumes a 100% efficiency charge, so it's not ideal, but it is a good, simple way to get a rough idea of charge time. For a more accurate estimation, you can assume 80% efficiency for NiCd and NiMh batteries and 90% efficiency for LiIon/LiPo batteries. Then, the formula becomes capacity / (efficiency * chargeRate) or, to use the same values from above (assuming lithium chemistry), 100Ah / (0.9 * 10A) = 11.11 hours – KOGI Aug 21 '13 at 15:58\n\nDischarge rates are well enough covered here.\n\nLiIon / LiPo have almost 100 current charge efficiency but energy charge efficiency depends on charge rate. H=Higher charge rates have lower energy efficiencies as resistive losses increase towards the end of charging.\n\nBelow LiIon and LiPo are interchangeable in this context.\n\nThe main reason to adding an answer to a 3+ year old question is to note that:\n\nLiIon / LiPo should not be charged at above manufacturers spec. This is usually C/1, sometimes C/2 and very occasionally 2C. Usually C/1 is safe.\n\nLiIon's are charged at CC = constant current = <= max allowed current from 'empty' until charge voltage reaches 4.2V. They are then charged at CV = constant voltage = 4.2V and the current falls under battery chemistry control.\n\nCharge endpoint is reached when I_charge in CV mode falls to some preset % of Imax - typically 25%. Higher % termination current = longer cycle life, lower charge time and slightly less capacity for the following discharge cycle.\n\nWhen charged from \"empty\" at C/1 a LiIon cell achieves about 70% - 80% of full charge in 0.6 to 0.7 hours ~= 40 to 50 minutes.\n\nThe CV stage typically takes 1.5 to 2 hours (depending on termination current% and other factors) so total charge time is about 40m +1.5 hours to 50 minutes +2 hours or typically 2+ to 3 hours overall. But, a very useful % of total charge is reached in 1 hour.\n\nPeukert's Law gives you the capacity of the battery in terms of the discharge rate. Lower the discharge rate higher the capacity. As the discharge rate ( Load) increases the battery capacity decereases.\n\nThis is to say if you dischage in low current the battery will give you more capacity or longer discharge . For charging calculate the Ah discharged plus 20% of the Ah discharged if its a gel battery. The result is the total Ah you will feed in to fully recharge.\n\n• How can you advise to charge with 10 A if you have absolutely no specs of the battery in question? – stevenvh Oct 11 '12 at 10:08\n\nIn the ideal/theoretical case, the time would be t = capacity/current. If the capacity is given in amp-hours and current in amps, time will be in hours (charging or discharging). For example, 100 Ah battery delivering 1A, would last 100 hours. Or if delivering 100A, it would last 1 hour. In other words, you can have \"any time\" as long as when you multiply it by the current, you get 100 (the battery capacity).\n\nHowever, in the real/practical world, you have to take into consideration the heat generated in each process, the efficiency, the type of battery, the operating range, and other variables. This is where \"rules of thumb\" come in. If you want a the battery to last a \"long\" time and no overheating, then the charging or discharging current must be kept at not more than 1/10 of the rated capacity. You also need to keep in mind that a battery is not supposed to be \"fully\" discharged. Typically, a battery is considered \"discharged\" when it looses 1/3 of its capacity, therefore it only needs 1/3 of its capacity to be fully charged (range of operation). With these constraints and the above values, one gets only one answer, t = 33Ah/10A = 3.3hr.\n\nRules of thumb given in other answers are often good enough but if you can find the datasheet of the battery it's best to check the relevant graph. As an example here's the datasheet of a low cost 12V battery. In the datasheet you'll find this graph:",
null,
"Let's say that this is a battery with 7Ahr capacity and that you want to draw 14A. You'll have to observe the 2C curve (2C means to discharge at 7Ahr*2/h=14A). You'll note that this battery will drop to 9.5V-10V after about 15mins. Of-course this is only true for a fresh from the shelf battery kept at 25 deg.Celsius. Temperature, age and usage negatively affect the performance.\n\n• Hi, I couldn't find the original source for that image you included. Can you please edit the answer to add a link to the original web page or PDF file etc., as required by this site rule? Thanks. – SamGibson Apr 15 '19 at 16:17\n• Fixed to help you (I'm not at all sure that the rules you linked to concern this case) – ndemou Apr 15 '19 at 16:32"
] | [
null,
"https://i.stack.imgur.com/xwzef.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9278044,"math_prob":0.93610144,"size":8341,"snap":"2019-51-2020-05","text_gpt3_token_len":2178,"char_repetition_ratio":0.13973852,"word_repetition_ratio":0.00405954,"special_character_ratio":0.27262917,"punctuation_ratio":0.10475617,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97809494,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T05:43:16Z\",\"WARC-Record-ID\":\"<urn:uuid:624b3909-0330-4d62-8776-17e95c143f25>\",\"Content-Length\":\"179245\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17f63728-678e-485a-b9ec-feca186d16ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:f94a6b22-6cea-4705-92c0-eeed66b4a8b5>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/24160/how-to-calculate-the-time-of-charging-and-discharging-of-battery/162813\",\"WARC-Payload-Digest\":\"sha1:T3XEVV56MS7PXYNLBMPYWTWSL2ZQARKR\",\"WARC-Block-Digest\":\"sha1:SR6BZHCRKRVQHMGJOTOWNLT6PT5TSBW2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250608295.52_warc_CC-MAIN-20200123041345-20200123070345-00211.warc.gz\"}"} |
https://www.elliottwaveforex.com/2016/07/page/3/ | [
"Top\n[param_value val=source]\nvalue=\"\"\n[param_value val=medium]\nvalue=\"\"\nvalue=\"\"\n[param_value val=creative]\nvalue=\"\"\n[param_value val=keyword]\nvalue=\"\"\n[param_value val=matchtype]\nvalue=\"\"\nvalue=\"\"\n[param_value val=aceid]\nvalue=\"\"\n[param_value val=device]\nvalue=\"\"\n[param_value val=devicemodel]\nvalue=\"\"\n[param_value val=target]\nvalue=\"\"\n[param_value val=placement]\nvalue=\"\""
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86680216,"math_prob":0.9996167,"size":2046,"snap":"2019-35-2019-39","text_gpt3_token_len":511,"char_repetition_ratio":0.13663076,"word_repetition_ratio":0.03814714,"special_character_ratio":0.25268817,"punctuation_ratio":0.11188811,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99851674,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T16:41:24Z\",\"WARC-Record-ID\":\"<urn:uuid:4363cbd1-4f6c-428a-b70f-fc0e929a57b7>\",\"Content-Length\":\"64940\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6ae161f-90d6-4c67-90fc-eced43e33792>\",\"WARC-Concurrent-To\":\"<urn:uuid:9be6685b-aeeb-46fd-a3a7-a4d5bdecae93>\",\"WARC-IP-Address\":\"178.128.5.163\",\"WARC-Target-URI\":\"https://www.elliottwaveforex.com/2016/07/page/3/\",\"WARC-Payload-Digest\":\"sha1:CPFYKHDKH5Q7WTUNRB6M5TDU4F4GXSVD\",\"WARC-Block-Digest\":\"sha1:Z6FKG7PURIBTYDGHVTHG2KARDWXTTLIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574050.69_warc_CC-MAIN-20190920155311-20190920181311-00339.warc.gz\"}"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.