# Polynomial roots calculator
This online calculator finds the roots of a given polynomial. For polynomials of degree four or less, the exact values of the roots (zeros) are returned. The calculator also shows the work and a detailed explanation.
Find real and complex zeroes of a polynomial.
examples
example 1: find roots of the polynomial $4x^2 - 10x + 4$
example 2: find polynomial roots of $-2x^4 - x^3 + 189$
example 3: solve the equation $6x^3 - 25x^2 + 2x + 8 = 0$
example 4: find polynomial roots of $2x^3 - x^2 - x - 3$
example 5: find roots of $2x^5 - x^4 - 14x^3 - 6x^2 + 24x + 40$
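For readers who want to reproduce these examples programmatically, here is a minimal sketch using numpy (my own illustration, not the calculator's code; unlike the calculator it returns numerical approximations of the roots rather than exact values):

```python
import numpy as np

# Coefficients listed from the highest degree down, e.g. example 1: 4x^2 - 10x + 4
examples = {
    "4x^2 - 10x + 4": [4, -10, 4],
    "-2x^4 - x^3 + 189": [-2, -1, 0, 0, 189],
    "6x^3 - 25x^2 + 2x + 8": [6, -25, 2, 8],
}

for name, coeffs in examples.items():
    roots = np.roots(coeffs)  # eigenvalues of the companion matrix; complex roots included
    print(f"{name}: {np.round(roots, 6)}")
```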
|
# Measures beyond Lebesgue: are Solovay's proofs extendible to them?
## Main Question or Discussion Point
In 1970, Solovay proved that,
although
(1) under the assumptions of ZF & "there exists a real-valued measurable cardinal", one could construct a measure μ (specifically, a countably additive extension of Lebesgue measure) such that all sets of real numbers were measurable (wrt μ),
nonetheless
(2) under the assumption of ZFC, one can construct a set (e.g., the Vitali set) which is not Lebesgue measurable.
However, I am not sure whether these proofs carry over to all measures: in other words, is it conceivable that, under ZFC & a sufficiently strong large cardinal axiom, there is a measure M so that all sets of real numbers are measurable wrt M? (For example, it would seem reasonable that the Vitali set is also not measurable by the μ in (1), but what of other measures?)
mathman
I am not familiar with the work you are discussing. However, with a measure where each point has measure 1, all sets are measurable.
|
# MOOC Completion Rates
I frequently drop out of MOOCs because:
1. it wasn't the right level (too easy or too hard)
2. I didn't like the style of it
3. I didn't have the time to put into it
4. there were too many other interesting MOOCs competing for my time
MOOCs let me evaluate this after I've enrolled (and been counted as an enrollment statistic).
|
# Solving coupled PDEs
by nickthequick
Hi, I am trying to simplify the following equations to get a relationship involving just $\eta$:
1) $\nabla^2 \phi(x,z,t) = 0$ for $x\in (-\infty,\infty)$, $z\in (-\infty,0]$ and $t \in [0,\infty)$,
subject to the boundary conditions
2) $\phi_t + g \eta(x,t) = f(x,z,t)$ at $z=0$,
3) $\eta_t = \phi_z$ at $z=0$, and
4) $\phi \to 0$ as $z \to -\infty$.
Here $g$ is a constant, $\eta$ and $\phi$ are the dependent variables of the system, and $f$ represents a forcing function. Another important constraint is that, for the systems I'm interested in, $f$ is nonzero only for a small time interval.
For the case where $f=0$, one can find that $\eta_{tt}-\frac{g}{k} \eta_{xx} = 0$, where $k$ is the wavenumber of the system. I want to find an analogous relation when forcing is present.
Any help is appreciated, Nick
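For reference, a brief sketch (standard linearized deep-water-wave reasoning, added editorially rather than taken from the thread) of how the quoted $f=0$ relation arises. Take a single-wavenumber solution of 1) that satisfies 4),
$$\phi(x,z,t) = A(t)\,e^{kz}e^{ikx}, \qquad \eta(x,t) = B(t)\,e^{ikx}.$$
Condition 3) gives $\eta_t = \phi_z|_{z=0} = k\,\phi|_{z=0}$, while condition 2) with $f=0$ gives $\phi_t|_{z=0} = -g\,\eta$. Differentiating the first relation in time and substituting the second,
$$\eta_{tt} = k\,\phi_t\big|_{z=0} = -gk\,\eta = \frac{g}{k}\,\eta_{xx},$$
since $\eta \propto e^{ikx}$ implies $\eta_{xx} = -k^2\eta$. Repeating the same steps with $f\neq 0$ gives, for that single mode, $\eta_{tt} - \frac{g}{k}\,\eta_{xx} = k\,f|_{z=0}$, which is one way to see what the forced analogue should look like.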
Seems to me like you need some conditions on the $t=0$ face of the domain.
In the case where $f=0$ it is clear, from the fact that $\eta(x,t)$ is governed by a wave equation, that we will need to know $\eta(x,0)$ and $\eta_t(x,0)$ to completely describe the (d'Alembert) solution. Let us say that before the forcing occurs, we know both $\eta(x,0)$ and $\eta_t(x,0)$. I do not see how this helps me find the 'particular' solution to this system of equations. This problem comes from physics - namely, it's the solution for (conservatively) forced, inviscid, irrotational surface gravity waves. The forcing I'm interested in acts in a 'spatially compact' region over a short time, say from $t_o$ to $t_o+\Delta t$. As a first step, I'm trying to solve this in the limit where the forcing is all concentrated at a particular point in space and time, $(x_o,z_o,t_o) = (0,0,0)$, but have not made any headway.
Also, an alternative way of looking at this problem is the following: The form of the Bernoulli equation in post 1 (condition 2) comes from
$\vec{u}_t=-\frac{1}{\rho} \nabla p + \vec{F}$
Where $\vec{F} = \vec{\nabla} f$.
The reason I took the route I did in post 1 was to avoid discussion of the pressure field, but an alternative way to look at this problem is by resolving this field. By taking the divergence of the Navier Stokes equation, we find
$\nabla^2 p = \nabla \cdot \vec{F}$
such that p=0 at z=0 and $\nabla p \to 0$ as $x \to \pm \infty$
If I can solve for the pressure field, then I can find the vertical velocity, $\phi_z$ at z=0 and then from there resolve the form of $\eta(x,t)$
I am trying to solve this for a very simple form of the forcing - namely $\vec{F} = C_o\, \delta(x_o, z_o, t_o)\, \hat{x}$ - but have not made much progress.
|
Report
# High-Performance Electrocatalysts for Oxygen Reduction Derived from Polyaniline, Iron, and Cobalt
Science 22 Apr 2011:
Vol. 332, Issue 6028, pp. 443-447
DOI: 10.1126/science.1200832
## Abstract
The prohibitive cost of platinum for catalyzing the cathodic oxygen reduction reaction (ORR) has hampered the widespread use of polymer electrolyte fuel cells. We describe a family of non–precious metal catalysts that approach the performance of platinum-based systems at a cost sustainable for high-power fuel cell applications, possibly including automotive power. The approach uses polyaniline as a precursor to a carbon-nitrogen template for high-temperature synthesis of catalysts incorporating iron and cobalt. The most active materials in the group catalyze the ORR at potentials within ~60 millivolts of that delivered by state-of-the-art carbon-supported platinum, combining their high activity with remarkable performance stability for non–precious metal catalysts (700 hours at a fuel cell voltage of 0.4 volts) as well as excellent four-electron selectivity (hydrogen peroxide yield <1.0%).
Thanks to the high energy yield and low environmental impact of hydrogen oxidation, the polymer electrolyte fuel cell (PEFC) represents one of the most promising energy conversion technologies available today. Of the many possible applications, ranging from sub-watt remote sensors to residential power generators in excess of 100 kW, automotive transportation is especially attractive. PEFCs promise major improvements over gasoline combustion, including better overall fuel efficiency and reduction in emissions (including CO2). The spectacular progress in fuel cell technology notwithstanding, a large-scale market introduction of fuel cell–powered vehicles continues to face various challenges, such as the lack of hydrogen infrastructure and the technical issues associated with PEFC performance and durability under the operating conditions of an automotive power plant. The high cost of producing PEFCs represents the most formidable challenge and has driven much of the applied and fundamental fuel cell research in recent years.
According to the latest cost analysis, the fuel cell—more precisely, the fuel cell stack—is responsible for more than 50% of the PEFC power system cost (1, 2). Although a state-of-the-art PEFC stack uses several high-priced components, the catalysts are by far the most expensive constituent, accounting for more than half of the stack cost. Because catalysts at both the fuel cell anode and cathode are based on platinum (Pt) or platinum alloys, their cost is directly linked to the price of Pt in the volatile and highly monopolized precious metal market. The precious metal catalyst is the only fuel cell stack component that will not benefit from economies of scale, and an increase in the demand for fuel cell power systems is bound to drive up the already high price of Pt, about $1830 per troy ounce at present ($2280 per troy ounce at its maximum in March 2008) (3). Thus, PEFCs are in need of efficient, durable, and inexpensive alternatives to Pt and Pt-based catalysts.
Ideally, Pt should be replaced at both fuel cell electrodes; however, its substitution at the cathode with a non–precious metal catalyst would have comparatively greater impact, because the slow oxygen reduction reaction (ORR) at this electrode requires much more Pt than the faster hydrogen oxidation at the anode. As a consequence, the development of non–precious metal catalysts with high ORR activity has recently become a major focus of PEFC research (4–8). The Pt replacement candidates that have attracted the most attention have been synthesized by heating precursors comprising nitrogen, carbon, and geologically abundant transition metals, iron and cobalt (M = Co and/or Fe) in particular (9–14). Although the nature of the active ORR catalytic sites in such N-M-C catalysts continues to be at the center of an ongoing debate (6, 7, 10, 15), there is no doubt that the ORR performance of N-M-C catalysts strongly depends on the type of nitrogen and transition-metal precursors used, heat treatment temperature, carbon support morphology, and synthesis conditions.
We recently initiated a research effort to develop non–precious metal catalysts that combine high ORR activity with good performance stability, originally concentrating on materials obtained without heat treatment. The polypyrrole (PPy)-Co-C system prepared this way showed respectable performance durability for a non–precious metal catalyst, but its oxygen reduction activity remained relatively low (5). We then shifted toward high-temperature systems synthesized using predominantly iron, cobalt, and heteroatom polymer precursors (polypyrrole and polyaniline) (16, 17). Such nitrogen-derived non–precious metal ORR catalysts have been under development for several decades, starting with the early work by Jasinski (18) and by Yeager and co-workers (19). Their research concentrated in particular on pyrolyzed transition metal–containing macrocycles and yielded catalysts that offered good ORR activity but suffered from poor stability in an acidic environment (20). The expensive macrocycles were later replaced in numerous studies by various combinations of nitrogen-containing compounds, transition-metal inorganic salts, and carbons, which ultimately led to considerable improvements in ORR activity but relatively little progress in stability. Polyaniline (PANI), which represents a favorable combination of aromatic rings connected via nitrogen-containing groups, was selected for this study as a promising template compound for nitrogen and carbon. Because of the similarity between the structures of PANI and graphite, the heat treatment of PANI could facilitate the incorporation of nitrogen-containing active sites into the partially graphitized carbon matrix. Furthermore, the use of such a polymer as a nitrogen precursor promised a more uniform distribution of nitrogen sites on the surface and an increase in the active-site density. In our effort, although several catalysts have shown promising oxygen reduction activity, only PANI-derived formulations appear to combine high ORR activity with unique performance durability for heat-treated non–precious metal catalysts. These catalysts are the subject of this report.
A schematic diagram describing the catalyst synthesis is shown in Fig. 1. In the approach used, a short-chain aniline oligomer was first mixed with high–surface area carbon material, pristine Ketjenblack EC-300J or modified Ketjenblack in the case of PANI-FeCo-C(2) (21), and transition metal precursors [cobalt(II) nitrate and/or iron(III) chloride], followed by the addition of (NH4)2S2O8 (ammonium persulfate, APS) as an oxidant to fully polymerize the aniline. After polymerization, water was evaporated from the suspension and the remaining solid phase was subjected to heat treatments in the range 400° to 1000°C under a N2 atmosphere. The heat-treated product was then preleached in 0.5 M H2SO4 at 80° to 90°C for 8 hours to remove any unstable and ORR-nonreactive phases. The preleached catalyst then underwent a second heat treatment under N2 as the final step of the synthesis (21).
The disparity in the precious and non–precious metal catalyst loading notwithstanding, the performance gap between a state-of-the-art Pt/C (E-TEK) and PANI-Fe-C, expressed as a half-wave potential difference (ΔE½) in rotating disk electrode (RDE) testing, has been substantially reduced in this work to 43 mV relative to Pt/C at a “standard” loading of 20 μgPt cm−2 (Fig. 2A) and 59 mV relative to Pt/C at a “high” loading of 60 μgPt cm−2.
Whereas cyclic voltammograms (CVs) of PANI-C and PANI-Co-C in N2-saturated H2SO4 solution are virtually featureless, the CV of PANI-Fe-C reveals a pair of well-developed redox peaks at ~0.64 V (fig. S1). The full width at half maximum (FWHM) of these peaks is ~100 mV, which is very close to the theoretical value of 96 mV expected for a reversible one-electron process involving surface species (27). There are two surface processes that can possibly give rise to the observed redox behavior in this case: (i) one-electron reduction/oxidation of the surface quinone/hydroquinone groups (28), and (ii) Fe3+/Fe2+ reduction/oxidation. In support of the latter reaction, an in situ electrochemical x-ray absorption study of the PANI-Fe-C system shows a correlation between the change in the oxidation state of Fe species in the catalysts and the potential of the reversible CV feature in the PANI-Fe-C catalyst voltammetry (17).
Additional kinetic data (fig. S2 and table S1) reveal differences in the Tafel slope of oxygen reduction on the different catalysts studied in this work. A Tafel slope of 67 mV decade−1 was measured for PANI-Co-C—a much lower value than the Tafel slope of 87 mV decade−1 obtained for PANI-Fe-C. The rate-determining step of the ORR for the latter catalyst is likely to simultaneously involve the migration of reaction intermediates and charge transfer. The exchange current density (i0) is nearly two orders of magnitude higher for the PANI-Fe-C catalyst (4 × 10−8 A cm−2) than for PANI-Co-C (5 × 10−10 A cm−2). Together with the differences in the onset potential of oxygen reduction (21), the Tafel slope (mass transport–corrected), and four-electron selectivity (Fig. 2A) already described, the disparity of two orders of magnitude in the i0 value for the two catalysts implies that the ORR-active sites and reaction mechanisms are likely different in both cases. Ex situ x-ray absorption analysis provides evidence for a different chemical environment in each case. Cobalt coordination in the PANI-Co-C catalyst appears to closely resemble that in Co9S8, with the dominant x-ray absorption fine structure (XAFS) peak between 2 to 3 Å consistent with a known Co-Co shell (29). XAFS of the PANI-Fe-C catalyst shows a peak at ~1.50 Å, which is indicative of coordination to a lighter element (either N or O) at a much shorter distance than nearest neighbors in metallic Fe (XAFS peak at ~2.2 Å) or in FeN4-type structures in Fe macrocycles (~1.63 Å) (30).
In catalyst synthesis chemistry, the heat treatment temperature is a major factor in inducing catalytic activity of PANI-derived catalysts and assuring performance stability. We used an RDE to study the ORR activity of a PANI-Fe-C catalyst as a function of the heat treatment temperature in the range 400° to 1000°C. RDE studies were conducted at room temperature and in 0.5 M H2SO4 electrolyte (Fig. 2B). The performance of the catalyst synthesized by heat treatment at 400°C is very similar to that of Ketjenblack itself (bottom part of Fig. 2A, plot 1). The activity, as measured by the ORR onset and half-wave potentials (E½) in the RDE polarization plots, increases upon raising the heat treatment temperature up to 900°C and then drops for catalysts synthesized at even higher temperatures. The H2O2 yield measured for the best-performing PANI-Fe-C catalyst, heat treated at 900°C, is below 1% over the potential range from 0.1 to 0.8 V versus RHE, signaling virtually complete reduction of O2 to H2O in a four-electron process. This avoidance of the much less efficient, and therefore undesirable, two-electron reaction to peroxide matches, and possibly exceeds, the four-electron selectivity of Pt-based catalysts (3 to 4% H2O2 yield at 0.4 V on 14 μgPt cm−2 Pt/C) (31).
We next conducted extensive physical characterization to obtain a structural explanation for the correlation of the ORR activity of the catalysts with the heat treatment temperature. Fourier transform infrared (FTIR) spectra of PANI-Fe-C (fig. S3) show that between 400° and 600°C the benzene-type (1100 cm−1) and quinone-type (1420 cm−1) structures (32) on the main PANI chain break into smaller fragments, such as C=N (1300 cm−1), which may be precursor states for ORR-active sites. The latter finding corresponds with scanning electron microscopy (SEM) results (fig. S4) indicating that in this range of heat treatment temperatures, PANI starts to lose its characteristic nanofibrous structure (fibers ~40 nm in diameter and ~200 nm in length) and gradually converts into more spherical particles. The carbon structure becomes more graphitic during the heat treatment at 900°C (HRTEM inset in fig. S4) (see below). After the treatment at an even higher temperature of 1000°C, the particle morphology becomes highly nonuniform, a change accompanied by a substantial surface area loss, as determined using Brunauer-Emmet-Teller (BET) technique.
The fuel cell polarization and stability plots for PANI-derived catalysts are shown in Fig. 3. In good agreement with electrochemical measurements, the addition of transition metals leads to a considerable activity enhancement of the catalysts relative to the metal-free PANI-C (Fig. 3A). Also in agreement with the RDE data in Fig. 2A, PANI-Fe-C exhibits higher ORR activity than PANI-Co-C. The best-performing catalyst in fuel cell testing, with an excellent combination of high ORR activity and long-term performance durability, is the more active of the two FeCo mixed-metal materials, PANI-FeCo-C(2). This catalyst also shows the highest RDE activity, matched only by PANI-Fe-C (see low current-density range in Fig. 2A, plots 6 and 7), as well as the highest maximum power density: 0.55 W cm−2, reached at 0.38 V. Unlike PANI-Fe-C, the mixed-metal catalysts maintain their high ORR activity when combined with Nafion ionomer in a fuel cell–type electrode under operation in the highly acidic environment of the fuel cell cathode, thus reflecting better stability of the PANI-FeCo-C catalyst (see below). The open-cell voltage (OCV) of a hydrogen fuel cell operated with Fe-containing PANI-derived catalysts is ~0.90 V with an air-operated cathode and ~0.95 V with a cathode operated on pure oxygen. The OCV value remains unchanged for more than 100 hours in the H2-air fuel cell (17).
A 700-hour fuel-cell performance test at a constant cell voltage of 0.4 V reveals very promising performance stability of the PANI-FeCo-C(1) catalyst at the fuel cell cathode. The cell current density in a lifetime test (Fig. 3B) remains nearly constant at ~0.340 A cm−2. The current density declines by only 3%, from the average value of 0.347 A cm−2 in the first 24 hours to 0.337 A cm−2 in the last 24 hours of the test (average current-density loss, 18 μA hour−1). Fuel cell performance durability (Fig. 3B) represents a substantial improvement over the durability recently reported by the Dodelet group (6). The initially very active catalyst in the latter work suffered from fast performance deterioration, losing ~38% of activity during 100 hours of H2-air testing at 0.40 V. Although quite durable by the standards of non–precious metal ORR catalysts, both PANI-Fe-C and PANI-Co-C are less stable than PANI-FeCo-C(1), incurring performance losses of ~90 and 130 μA hour−1, respectively. A stabilizing role of Co in the binary catalyst is a distinct possibility.
The high stability of the PANI-derived catalysts does not apply solely to constant-potential operation of an RDE (nor to constant-voltage operation of a fuel cell); it extends over potential-cycling conditions, and hence it is especially relevant to practical fuel cell systems. High cycling stability of a PANI-Fe-C catalyst at various RDE potentials and fuel cell voltages is demonstrated in Fig. 3, C and D, respectively (see also fig. S5). The cycling was carried out within a potential (RDE) and voltage (fuel cell) range of 0.6 to 1.0 V in nitrogen gas at a scan rate of 50 mV s−1 (a protocol recommended by the U.S. automotive industry). The catalyst performance loss calculated from linear regression was 10 to 39% after 10,000 RDE cycles (Fig. 3C) and 3 to 9% after 30,000 fuel cell cycles (Fig. 3D), further attesting to the high durability of PANI-derived catalysts, especially in the fuel cell cathode.
The morphology of the highly ORR-active and durable PANI-FeCo-C(1) catalyst before and after the heat treatment at 900°C (followed by acid leaching) is depicted by the SEM images in fig. S6. In this catalyst, as in PANI-Fe-C, the PANI nanofibers are replaced by a highly graphitized carbon phase during heat treatment. High-resolution transmission electron microscopy (HRTEM) and high-angle annular dark-field scanning TEM (HAADF-STEM) images of the PANI-FeCo-C(1) catalyst heat-treated at 900°C are shown in Fig. 4 and attest to the presence of diverse carbon nanostructures, also highlighted in HRTEM images in figs. S7 and S8. Metal-containing particles are to a large extent encapsulated in well-defined onion-like graphitic carbon nanoshells (Fig. 4, A and C). Such a well-defined graphitized carbon shell surrounding metal-rich particles was previously observed with iron(III) tetramethoxyphenyl porphyrin chloride (FeTMPP-Cl) when the heat treatment temperature was raised to 1000°C (33). In the latter work, the graphite shell formation was correlated to an increase in the catalyst open-circuit potential in oxygen-saturated solution. Carbon nanosheets grown over the metal particles are also observed in Fig. 4, A and B. In some cases, the metallic cobalt and/or iron sulfide phases (16) within the graphite-coated particles can be removed by acid leaching, leaving behind hollow and onion-like carbon nanoshells that are easily observable by HRTEM (Fig. 4B).
The carbon structure formed during the heat treatment, rather than being ideally graphitic, is somewhat disordered (e.g., turbostratic or mesographitic). This lattice distortion within the c-planes is reflected by a larger d-spacing of the (002) basal planes in the carbon relative to that in a well-ordered structure of graphite (34). The measured (002) d-spacing, ranging from ~0.34 to 0.36 nm, may facilitate incorporation of nitrogen into the graphitic structure and thereby enhance the number of active sites. The formation of graphene sheets appears to be closely associated with the improved durability of PANI-derived catalysts. A substantial fraction of multilayered graphene sheets in the catalyst are colocated with the particles of Fe(Co)Sx, as shown by the green and red arrows, respectively, in the complementary TEM, HAADF-STEM, and SEM images acquired from the same location in Fig. 4, D to F, respectively. Apart from likely contributing to active-site formation, the graphitization of the onion-like nanoshells and nanofibers as well as the presence and formation of graphene sheets throughout the PANI-derived catalysts may also enhance the electronic conductivity and corrosion resistance of the carbon-based catalysts (35, 36). The presence of the graphitized carbon phase in an active catalyst deserves further study, as such a phase may play a role in hosting ORR-active sites and enhancing stability of the PANI-derived catalysts.
Bridging the remaining performance gap in intrinsic activity and durability between non–precious metal catalysts and platinum in PEFCs will require determination of the active oxygen reduction site and a better understanding of the reaction mechanism.
## Supporting Online Material
www.sciencemag.org/cgi/content/full/332/6028/443/DC1
Materials and Methods
Table S1
Figs. S1 to S8
References
## References and Notes
1. B. James, J. Kalinoski, in DOE-EERE Fuel Cell Technologies Program—2009 DOE Hydrogen Program Review (www.hydrogen.energy.gov/pdfs/review09/fc_30_james.pdf).
2. J. Sinha, S. Lasher, Y. Yang, in DOE-EERE Fuel Cell Technologies Program—2009 DOE Hydrogen Program Review (www.hydrogen.energy.gov/pdfs/review09/fc_31_sinha.pdf).
3. Johnson Matthey (www.platinum.matthey.com/pgm-prices/price-charts).
4. See supporting material on Science Online.
5. We thank R. Adzic, Y. S. Kim, J. Chlistunoff, F. Garzon, R. Mukundan, H. Chung, and S. Conradson for stimulating discussions. Supported by the Energy Efficiency and Renewable Energy Office of the U.S. Department of Energy (DOE) through the Fuel Cell Technologies Program, and by Los Alamos National Laboratory through the Laboratory-Directed Research and Development Program. Microscopy research was supported by the Oak Ridge National Laboratory’s SHaRE User Facility, sponsored by the DOE Office of Basic Energy Sciences. The authors have filed a patent through Los Alamos National Laboratory on the catalysts described herein.
|
John Smeaton
John Smeaton FRS (8 June 1724 – 28 October 1792) was a British civil engineer responsible for the design of bridges, canals, harbours and lighthouses. He was also a capable mechanical engineer and an eminent physicist
Viaduct
A viaduct is a bridge composed of several small spans for crossing a valley, dry or wetland, or forming a flyover. The term is conventional for a rail flyover as opposed to a flying junction or a rail bridge which crosses one feature. In Romance languages, the word viaduct refers to a bridge which spans only land. A bridge spanning water is called ponte. The term viaduct is derived from the Latin via for road and ducere, to lead
Wind Tunnel
A wind tunnel is a tool used in aerodynamic research to study the effects of air moving past solid objects. A wind tunnel consists of a tubular passage with the object under test mounted in the middle. Air is made to move past the object by a powerful fan system or other means. The test object, often called a wind tunnel model, is instrumented with suitable sensors to measure aerodynamic forces, pressure distribution, or other aerodynamic-related characteristics. The earliest wind tunnels were invented towards the end of the 19th century, in the early days of aeronautic research, when many attempted to develop successful heavier-than-air flying machines. The wind tunnel was envisioned as a means of reversing the usual paradigm: instead of the air standing still and an object moving at speed through it, the same effect would be obtained if the object stood still and the air moved at speed past it
Conservation Of Momentum
In Newtonian mechanics, linear momentum, translational momentum, or simply momentum (pl. momenta) is the product of the mass and velocity of an object. It can be more generally stated as a measure of how hard it is to stop a moving object. It is a three-dimensional vector quantity, possessing a magnitude and a direction. If m is an object's mass and v is the velocity (also a vector), then the momentum is
${\displaystyle \mathbf {p} =m\mathbf {v} ,}$
In SI units, it is measured in kilogram meters per second (kg·m/s).
Isaac Newton
Sir Isaac Newton PRS (25 December 1642 – 20 March 1726/27) was an English mathematician, physicist, astronomer, theologian, and author (described in his own day as a "natural philosopher") who is widely recognised as one of the most influential scientists of all time and as a key figure in the scientific revolution. His book Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687, laid the foundations of classical mechanics. Newton also made seminal contributions to optics, and shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus. In Principia, Newton formulated the laws of motion and universal gravitation that formed the dominant scientific viewpoint until it was superseded by the theory of relativity
Dynamic Pressure
${\displaystyle q={\tfrac {1}{2}}\,\rho \,u^{2},}$
where ${\displaystyle q\;}$ = dynamic pressure in pascals and ${\displaystyle \rho \;}$ = fluid density in kg/m³.
Gottfried Leibniz
Gottfried Wilhelm (von) Leibniz (sometimes spelled Leibnitz) (/ˈlaɪbnɪts/; German: [ˈɡɔtfʁiːt ˈvɪlhɛlm fɔn ˈlaɪbnɪts] or [ˈlaɪpnɪts]; French: Godefroi Guillaume Leibnitz; 1 July 1646 [O.S. 21 June] – 14 November 1716) was a prominent German polymath and one of the most important logicians, mathematicians and natural philosophers of the Enlightenment. As a representative of the seventeenth-century tradition of rationalism, Leibniz's most prominent accomplishment was conceiving the ideas of differential and integral calculus, independently of Isaac Newton's contemporaneous developments. Mathematical works have consistently favored Leibniz's notation as the conventional expression of calculus.
Windmill
A windmill is a mill that converts the energy of wind into rotational energy by means of vanes called sails or blades. Centuries ago, windmills usually were used to mill grain (gristmills), pump water (windpumps), or both. The majority of modern windmills take the form of ...
Mortar (masonry)
Mortar is a workable paste used to bind building blocks such as stones, bricks, and concrete masonry units together, fill and seal the irregular gaps between them, and sometimes add decorative colors or patterns in masonry walls. In its broadest sense mortar includes pitch, asphalt, and soft mud or clay, such as used between mud bricks. Mortar comes from Latin mortarium, meaning crushed. Cement mortar becomes hard when it cures, resulting in a rigid aggregate structure; however, the mortar is intended to be weaker than the building blocks and the sacrificial element in the masonry, because the mortar is easier and less expensive to repair than the building blocks. Mortars are typically made from a mixture of sand, a binder, and water. The most common binder since the early 20th century is Portland cement, but the ancient binder lime mortar is still used in some new construction.
George Romney (painter)
George Romney (26 December 1734 – 15 November 1802) was an English portrait painter.
Dovetail Joint
A dovetail joint, or simply dovetail, is a joinery technique most commonly used in woodworking joinery (carpentry), including furniture, cabinets, log buildings and traditional timber framing. Noted for its resistance to being pulled apart (tensile strength), the dovetail joint is commonly used to join the sides of a drawer to the front. A series of 'pins' cut to extend from the end of one board interlock with a series of 'tails' cut into the end of another board. The pins and tails have a trapezoidal shape. Once glued, a wooden dovetail joint requires no mechanical fasteners. The dovetail joint, also known as a culvertail joint, probably pre-dates written history. Some of the earliest known examples of the dovetail joint are in ancient Egyptian architecture entombed with mummies dating from the First Dynasty, as well as the tombs of Chinese emperors.
Navigation
Navigation is a field of study that focuses on the process of monitoring and controlling the movement of a craft or vehicle from one place to another. The field of navigation includes four general categories: land navigation, marine navigation, aeronautic navigation, and space navigation. It is also the term of art used for the specialized knowledge used by navigators to perform navigation tasks.
Pyrometer
A pyrometer is a type of remote-sensing thermometer used to measure the temperature of a surface. Various forms of pyrometers have historically existed. In the modern usage, it is a device that from a distance determines the temperature of a surface from the spectrum of the thermal radiation it emits, a process known as pyrometry and sometimes radiometry. The word pyrometer comes from the Greek word for fire, "πυρ" (pyro), and meter, meaning to measure.
Lift Coefficient
The lift coefficient (CL, CN or Cz) is a dimensionless coefficient that relates the lift generated by a lifting body to the fluid density around the body, the fluid velocity and an associated reference area. A lifting body is a foil or a complete foil-bearing body such as a fixed-wing aircraft. CL is a function of the angle of the body to the flow, its Reynolds number and its Mach number.
Fellow Of The Royal Society
Fellowship of the Royal Society (FRS, ForMemRS and HonFRS) is an award granted to individuals that the Royal Society judges to have made a "substantial contribution to the improvement of natural knowledge, including mathematics, engineering science and medical science".
River Tweed
The River Tweed, or Tweed Water (Scottish Gaelic: Abhainn Thuaidh, Scots: Watter o Tweid), is a river 97 miles (156 km) long that flows east across the Border region in Scotland and northern England. Tweed (cloth) derives its name from its association with the River Tweed. The Tweed is one of the great salmon rivers of Britain and the only river in England where an Environment Agency rod licence is not required for angling.
|
# increasing bijection
Using the back-and-forth method we can construct an increasing bijection from the set of rational numbers to the set of rational numbers except zero.
http://en.wikipedia.org/wiki/Back-and-forth_method
I would like to have a "natural" bijection. The algorithm resulting from the back-and-forth method behaves rather chaotically.
It would be nice, for example, to have a uniform bound on the number of steps needed to evaluate the image of any given rational number $a=\frac{p}{q}$. I'm not sure what should count as a "step" here, maybe adding or multiplying integers.
-
Do you mean "naturals without zero"? I haven't seen any mention of the exact bijection you mention. – Jason Dyer Nov 21 '09 at 3:35
@Jason: I don't think "naturals" was meant. See the link given to something much more general. – Jonas Meyer Nov 21 '09 at 9:29
Ah, I see it was a competition problem: mathlinks.ro/viewtopic.php?t=308908 – Jason Dyer Nov 21 '09 at 13:12
Choose sequences of rational numbers $a_i$ and $b_i$ strictly monotonically converging to $\sqrt{2}$ from below and above. Map $a_i$ to $-1/i$ and map $b_i$ to $1/i$. Extend linearly. This meets your criteria, if we allow ourselves to "know" where $p/q$ is with respect to the $a_i$ and the $b_i$.
-
I don't even see how to construct your sequence $a_i$ meeting my criteria. Can we decide what the nth decimal digit of $\sqrt 2$ is in a uniformly bounded number of steps? – Manuel Silva Nov 21 '09 at 16:26
Any irrational number will do, some can be "computed" very fast, like 0.01001000100001... – sdcvvc Nov 21 '09 at 19:39
The Stern-Brocot tree gives a representation of (Q+,<) as infinite binary search tree. One can create another infinite binary search tree with 0 on top, positive rationals on right and negative ones on left. This gives a representation of (Q,<) as an infinite binary search tree. If we remove 0, then we get a sum of two trees ("two trees side by side"). The problem is to create an order isomorphism between the tree corresponding to (Q,<) and sum of two trees corresponding to (Q-{0},<).
These two trees can be merged into one as follows: let root(T), left(T), right(T) be the root, left subtree and right subtree of a tree T. The merge is defined recursively by:
                                  root(T1)
                                 /        \
    merge(T1,T2) =        left(T1)        root(T2)
                                         /        \
                     merge(right(T1),left(T2))    right(T2)
(to be precise, this definition is coinductive)
The Stern-Brocot tree is the Euclidean algorithm inside, so the complexity aspect you seem to be interested in should be easy. [Foo's answer probably gives a much easier analysis; I just wanted to show the combinatorial "look" at the linear orders.]
-
|
# Cheeky
Calculus Level 3
Let $y = \ln(1-x) + z$ and $x = \sin z$. Find the value of $\dfrac{dy}{dx}$ at $x=0$.
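One way to work it out (an editorial sketch, not part of the original problem statement; it takes the branch with $z = 0$ when $x = 0$): differentiating both relations with respect to $x$,
$$\frac{dy}{dx} = -\frac{1}{1-x} + \frac{dz}{dx}, \qquad 1 = \cos z \,\frac{dz}{dx} \;\Rightarrow\; \frac{dz}{dx} = \frac{1}{\cos z},$$
so at $x = 0$, where $z = 0$ and $\cos z = 1$,
$$\left.\frac{dy}{dx}\right|_{x=0} = -1 + 1 = 0.$$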
|
# evaluate an expression which is the arithmetic mean of first $N$ partial sums of a geometric progression
I am trying to evaluate an expression which is the arithmetic mean of the first $N$ partial sums of a geometric progression. It is given below.
$\frac{1}{N}\sum\limits_{k=0}^{N-1}(N-k)z^k$
Please suggest some hints or ideas on how to proceed.
-
More generally, you can evaluate
$$\sum_{k=0}^{N-1}P(k)z^k$$
for any polynomial $P$ by using
$$(z\frac{\mathrm d}{\mathrm d z}) z^k=kz^k\;.$$
Thus, you can replace $k$ by $D:=z\frac{\mathrm d}{\mathrm d z}$ in $P\,$:
$$\begin{eqnarray} \frac{1}{N}\sum_{k=0}^{N-1}(N-k)z^k &=& \sum_{k=0}^{N-1}\left(1-\frac{k}{N}\right)z^k \\ &=& \sum_{k=0}^{N-1}\left(1-\frac{D}{N}\right)z^k \\ &=& \left(1-\frac{D}{N}\right)\sum_{k=0}^{N-1}z^k \\ &=& \left(1-\frac{D}{N}\right)\frac{z^N-1}{z-1}\;. \end{eqnarray}$$
-
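As a quick sanity check of the closed form above, here is a small sympy sketch (my own, not part of the original answer) comparing it with the direct sum for a concrete $N$:

```python
import sympy as sp

z = sp.symbols('z')
N = 5  # check one concrete N; z stays symbolic

# Direct evaluation of (1/N) * sum_{k=0}^{N-1} (N - k) z^k
direct = sum(sp.Rational(N - k, N) * z**k for k in range(N))

# Closed form from the answer: (1 - D/N) applied to (z^N - 1)/(z - 1), with D = z d/dz
S = (z**N - 1) / (z - 1)
closed = S - z * sp.diff(S, z) / N

assert sp.cancel(direct - closed) == 0   # the two expressions agree identically in z
print(sp.simplify(closed))
```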
|
# Using the result of the Gaussian Integral to evaluate other funky integrals
December 6th, 2017 #1 I evaluated the Gaussian integral using polar substitution, and got an answer of $\sqrt{\pi}$. But my professor also asked us to compute the integral of $e^{-x^2/2}$ from negative to positive infinity and the integral of $x^2 e^{-x^2}$ from $0$ to infinity, using our results from the previous step in just a few lines of work. How do I do that using my answer for part (a)?
December 7th, 2017 #2 I presume you meant that you found $\displaystyle \int_{-\infty}^\infty e^{-x^2}\, dx = \sqrt{\pi}$; by the symmetry of $\displaystyle e^{-x^2}$ about $x= 0$, the half-line integral is then $\displaystyle \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}$. Then $\displaystyle \int_{-\infty}^{\infty} e^{-x^2/2}\, dx$ is easy; there's an obvious substitution. Let $\displaystyle u= \frac{x}{\sqrt{2}}$, so $\displaystyle du= \frac{1}{\sqrt{2}}\, dx$ and $\displaystyle dx= \sqrt{2}\,du$. As $x$ goes to $\infty$ so does $u$, and as $x$ goes to $-\infty$ so does $u$. The integral becomes $\displaystyle \sqrt{2}\int_{-\infty}^\infty e^{-u^2}\,du$. For the second, $\displaystyle \int_0^\infty x^2e^{-x^2}\, dx$, use integration by parts: let $\displaystyle u= x$ and $\displaystyle dv= xe^{-x^2}\,dx$. Once you have that, the remaining integral is the Gaussian integral you already know.
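For reference, the values these hints lead to (editorial working, starting from $\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$):
$$\int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2}\int_{-\infty}^{\infty} e^{-u^2}\,du = \sqrt{2\pi}, \qquad \int_0^{\infty} x^2 e^{-x^2}\,dx = \Big[-\tfrac{x}{2}e^{-x^2}\Big]_0^{\infty} + \tfrac{1}{2}\int_0^{\infty} e^{-x^2}\,dx = \frac{\sqrt{\pi}}{4}.$$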
|
# Sub-/Super Diffusive Processes, Time-averaged MSD and Autocorrelation. Help with interpretation?
This is a two-part question, cross-posted from the Math-SE because I thought you people might be able to help a little more with intuition:
Part 1:
In a (totally fascinating) paper studying the distance between two domains of a protein in a Molecular Dynamics simulation, the time-averaged mean square displacement of the distance between the two protein domains, $R(t)$, is given by:
$$\overline{\delta^{2}(\Delta;t)} = \frac{1}{t-\Delta} \int_{0}^{t-\Delta} [R(t'+ \Delta)-R(t')]^{2} \ dt'$$
where $\Delta$ is the "lag time" and $t$ is the total observation time of the simulation.
I'm familiar with the normal average value of a function $f(x)$ between $a$ and $b$:
$$\overline{f(x)}_{(a,b)} = \frac{1}{b-a}\int_{a}^{b} f(x) \ dx,$$
and I understand that $[R(t'+\Delta)-R(t')]^2$ is the (squared) measure of the difference in the distance between the two domains after a certain "lag time" $\Delta$, and that $\overline{\delta^{2}(\Delta;t)}$ is measuring the average of this measure over a time $(t-\Delta)$.
For example, in a simulation with total observation time $t=100$ ps, $\overline{\delta^{2}(\Delta;t)} \propto \Delta^{1.5}$, with $\Delta \in (10^{-1}\text{ps},10^1\text{ps})$.
At the lower end, the integral is averaging the distance between the domains after a lag time of $10^{-1}$ ps over ~$100$ ps. At the upper end, the integral is averaging the distance between the domains after a lag time of $10$ ps over $90$ ps. What information does the relationship between $\Delta$ and $\overline{\delta^2(\Delta;t)}$ convey? The fact that the time-averaged mean square displacement goes up as the lag time goes up means... what exactly? And what would it mean if this proportionality were $\overline{\delta^{2}(\Delta;t)} \propto \Delta^{\alpha}$, with $\alpha<1$?
To sum up a bit, what I don't understand is this:
• What information does this integral convey?
• What's the purpose of changing the time over which the square displacement is averaged in the formula?
• I get that diffusive processes have MSDs that are linear in time (i.e. $\text{MSD} \propto t$), but this process being sub-/super-diffusive means what exactly? Its parts move apart slower/faster than would be expected by just random diffusion? One of the main take-aways from the paper seems to be that the domain-separation distance is subdiffusive, which they say leads to some sort of "ageing" behavior. I must admit I don't really get why this would be surprising... The two domains are connected to each other, wouldn't it make sense that they don't move apart from one another at the same rate as simple Brownian motion (diffusion) would carry them apart from one another?
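For concreteness, here is a minimal numpy sketch (my own illustration, not code from the paper) of how the time-averaged MSD defined above is typically evaluated for an observable sampled at discrete times with spacing `dt`:

```python
import numpy as np

def time_averaged_msd(R, dt, lag_times):
    """Discrete version of (1/(t - Delta)) * integral_0^{t - Delta} [R(t' + Delta) - R(t')]^2 dt'.

    R         : 1D array of the observable (e.g. the inter-domain distance) sampled every dt
    lag_times : iterable of lag times Delta, each (approximately) a multiple of dt
    """
    R = np.asarray(R, dtype=float)
    out = []
    for delta in lag_times:
        m = int(round(delta / dt))        # lag expressed in samples
        disp = R[m:] - R[:-m]             # R(t' + Delta) - R(t') for every admissible t'
        out.append(np.mean(disp ** 2))    # average over the window of length t - Delta
    return np.array(out)

# Example: an ordinary random walk gives a TAMSD roughly proportional to Delta (alpha ~ 1)
rng = np.random.default_rng(0)
dt = 0.01
R = np.cumsum(rng.normal(scale=np.sqrt(dt), size=100_000))
print(time_averaged_msd(R, dt, [0.1, 0.2, 0.5, 1.0, 2.0]))
```

Plotting $\overline{\delta^2(\Delta;t)}$ against $\Delta$ on log-log axes and reading off the slope gives the exponent $\alpha$ in question: $\alpha \approx 1$ for ordinary diffusion, $\alpha < 1$ for sub-diffusion, $\alpha > 1$ for super-diffusion.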
Part 2:
The same paper defines the normalized auto-correlation function of the distance between the domains as: $C(\Delta;t) = C'(\Delta;t)/C'(0;t)$, where:
$$C'(\Delta;t)=\frac{1}{t-\Delta}\int_{0}^{t-\Delta}\delta R(t')\delta R(t' + \Delta) \ dt'$$
where $\delta R(t)=R(t)-\langle R \rangle$; in other words, how far the distance between the domains is from the average inter-domain distance.
Here, I have to admit more ignorance than in Part 1. I understand that the auto-correlation function is supposed to be some measure of the similarity of a function to itself at different times, but I don't understand how this function achieves that measure. I wish I had a more pointed question to ask, but I'm hoping that someone can help anyway. I understand if it's too broad.
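Again purely as an illustration (my own sketch, not the paper's code), the discrete analogue of $C(\Delta;t) = C'(\Delta;t)/C'(0;t)$ can be computed like this:

```python
import numpy as np

def normalized_autocorrelation(R, dt, lag_times):
    """Discrete version of C(Delta; t) = C'(Delta; t) / C'(0; t), where
    C'(Delta; t) averages dR(t') * dR(t' + Delta) over t' and dR(t) = R(t) - <R>."""
    dR = np.asarray(R, dtype=float) - np.mean(R)
    c0 = np.mean(dR * dR)                 # C'(0; t): the variance of R
    out = []
    for delta in lag_times:
        m = int(round(delta / dt))
        out.append(1.0 if m == 0 else np.mean(dR[:-m] * dR[m:]) / c0)
    return np.array(out)
```

Intuitively, $C(\Delta;t)$ asks: if $R$ is above (or below) its mean now, how likely is it to still be on the same side a time $\Delta$ later? It equals $1$ at $\Delta = 0$ and decays toward $0$ as the fluctuations decorrelate, so a slow decay means the inter-domain distance "remembers" its past for a long time.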
|
# What is the minimum value of $x^2+12x$?
What is the minimum value of $x^2+12x$?
I do not know what is meant by the minimum value.
-
Why did you write that your question requires moderator attention? – Alex Becker Jan 2 '14 at 4:57
"Minimum value" refers to the smallest value that you can get from $x^2 + 12x$. – user61527 Jan 2 '14 at 4:57
How is defined the minimum of a function ? What did you try ? – Claude Leibovici Jan 2 '14 at 4:58
I've added a more appropriate tag. For future information, to get a moderator's attention you should flag the question, using the "flag" button at the bottom of the question. But you don't need a moderator to edit tags; most users can do so. So if you're not sure how to tag a question just add a comment to that effect. – Alex Becker Jan 2 '14 at 5:03
This edit pileup was ridiculous. – Alex Becker Jan 2 '14 at 5:06
I realize this has been answered but here is my two bits worth.
Look at $x^2+12 x$. It is an expression. Now pick a value for $x$, say $x=1$. Then the value of the expression is $1 + 12 = 13$. Pick another value, say $x=-10$. Then the value is $100-120 = -20$. Notice that $-20$ is less than $13$. You keep picking different values of $x$. The question asks: what is the smallest value you can get?
To answer this problem, you need one fact: the smallest value a square can ever be is zero.
Now to use this we can write $$x^2 + 12 x = x^2 + 12 x +36 - 36 = (x+6)^2 - 36$$ I have not changed anything. If I put $x=-10$, I get $$(-4)^2 - 36 = -20$$ same as before. But if I look at $(x+6)^2 -36$, I can't change the $-36$ but to make $(x+6)^2$ as small as possible, I have to set $x=-6$. So the complete answer is
The minimum value of $x^2 + 12 x$ is $-36$ and this happens when $x = -6$.
The trick of writing the expression as a square plus a constant is called completion of squares and you may need it a lot. Here is the formula for this:
If you have $a x^2 + b x$ then add (and subtract) $b^2/(4 a)$ to complete the square.
-
Why the downvote? I find this very helpful to the OP (and the only answer that goes into detail on the "I do not know what is meant by the minimum value" part); wonder why someone gave it a downvote. – ShreevatsaR Jan 2 '14 at 13:51
For an upward-opening parabola like this one, the vertex is its minimum.
The $x$ coordinate of the vertex can be found by the formula
$$x_v = -\frac{b}{2a}$$ So, \begin{align*} x_v &= -\frac{12}{2(1)} \\ &= -\frac{12}{2} \\ &= -6 \end{align*}
To find the $y$ coordinate of the vertex, substitute $x_v$ into $f(x)$ for $x$.
\begin{align*} y_v = f(x_v) &= x_{v}^{2} + 12x_{v} \\ &= (-6)^{2} + 12(-6) \\ &= 36 -72 \\ &= -36 \end{align*}
So, the minimum is at
$$(-6, -36)$$
-
Usually, when we have to find the minimum of a quadratic expression, we try to complete the square.
We have $y=x^2 + 12x$ . We want to find the minimum possible value of y.
$$y=x^2 + 12x \\= x^2 +2(6)(x) + 6^2 - 6^2 \\=(x+6)^2 - 36$$
The lowest possible value of the squared term, i.e. $(x+6)^2$, is $0$, attained when $x = -6$.
So the lowest possible value of $y = x^2 + 12x$ is $-36$.
Another way to do it is by using the vertex formula, which @okarin has done.
-
HINT :
Notice that $$y=x^2+12x=(x+6)^2-36$$ represents a parabola.
This parabola has the minimum value at its vertex $(-6,-36)$.
Hence, the answer is $-36$.
-
This is a quadratic equation and can be put in the form $y=ax^2+bx+c$, here $y = x^2+12x+0$. If $a$ is negative, the parabola it makes opens downwards. If $a$ is positive, like it is in this case, the parabola opens upwards. If it opens upwards it has a minimum, and if it opens downwards it has a maximum, so the vertex is always the minimum or maximum: a minimum for $a>0$, a maximum for $a<0$. So, if you can find the vertex you will have it. As you see, the expression can be rewritten as $(x+6)^2-36$; this is the vertex form of the parabola. The vertex form is a quadratic written as $a(x-h)^2+k$, where $(h,k)$ is the vertex. So, the minimum is $(-6,-36)$.
-
There's no quadratic equation here; just a quadratic expression / function. – ShreevatsaR Jan 2 '14 at 5:12
In general, to find the minimum of a quadratic function $y = ax^2 + bx + c$, we can use a few different methods. Note that this function has a minimum iff $a > 0$. If $a < 0$, it has a maximum, not a minimum (and if $a = 0$ the function is linear and has neither).
Complete the square
Here, we try to write the function in the form $y = a \cdot (x - h)^2 + k$. Once we have this, we know that $a \cdot \left( x - h \right)^2 \ge 0$, so $y \ge k$. We can see this by graphing it as well. The vertex would be at $(h, k)$.
An example of this would be the function $y = x^2 + 12x = \left( x + 6 \right)^2 - 36$, so the minimum value is $-36$, which occurs at $x=-6$.
Formula
It is a general rule, which can be proved using the complete-the-square method that the vertex of a parabola with equation $y = ax^2 + bx + c$ occurs at $x = - \dfrac {b}{2a}$. Plugging in the $x$-value from here, we can find the $y$-coordinate of the vertex. Another proof of this uses Calculus.
Let $f(x) = ax^2 + bx + c$. Then, differentiating, we have $f'(x) = 2ax + b$. Setting this to $0$, we have $$2ax + b = 0 \implies x = - \dfrac {b}{2a}.$$
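As a quick cross-check of both approaches (an editorial sketch, not part of the original answers):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**2 + 12*x

# Calculus route: stationary point f'(x) = 0, then the value there
x_min = sp.solve(sp.diff(f, x), x)[0]    # gives -b/(2a) = -6
print(x_min, f.subs(x, x_min))           # -6  -36

# Completing the square gives the same thing: x^2 + 12x == (x + 6)^2 - 36
print(sp.expand((x + 6)**2 - 36 - f))    # 0
```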
-
|
# Erdős–Burr conjecture
In mathematics, the Erdős–Burr conjecture was a problem concerning the Ramsey number of sparse graphs. The conjecture is named after Paul Erdős and Stefan Burr, and is one of many conjectures named after Erdős; it states that the Ramsey number of graphs in any sparse family of graphs should grow linearly in the number of vertices of the graph.
The conjecture was proven by Choongbum Lee (thus it is now a theorem).[1]
## Definitions
If G is an undirected graph, then the degeneracy of G is the minimum number p such that every subgraph of G contains a vertex of degree p or smaller. A graph with degeneracy p is called p-degenerate. Equivalently, a p-degenerate graph is a graph that can be reduced to the empty graph by repeatedly removing a vertex of degree p or smaller.
It follows from Ramsey's theorem that for any graph G there exists a least integer ${\displaystyle r(G)}$ , the Ramsey number of G, such that any complete graph on at least ${\displaystyle r(G)}$ vertices whose edges are coloured red or blue contains a monochromatic copy of G. For instance, the Ramsey number of a triangle is 6: no matter how the edges of a complete graph on six vertices are colored red or blue, there is always either a red triangle or a blue triangle.
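To make the triangle example concrete, here is a small brute-force check in Python (an editorial illustration, not part of the original article) that $r(K_3)=6$: every 2-colouring of the edges of $K_6$ contains a monochromatic triangle, while $K_5$ admits a colouring with none.

```python
from itertools import combinations, product

def has_mono_triangle(n, colour):
    # colour maps each edge (i, j) with i < j to 0 (red) or 1 (blue)
    return any(colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
               for a, b, c in combinations(range(n), 3))

# Every red/blue colouring of K6 contains a monochromatic triangle (2^15 colourings in total) ...
edges6 = list(combinations(range(6), 2))
assert all(has_mono_triangle(6, dict(zip(edges6, bits)))
           for bits in product((0, 1), repeat=len(edges6)))

# ... but K5 does not: colour the 5-cycle red and the remaining edges (the pentagram) blue.
edges5 = list(combinations(range(5), 2))
cycle = {tuple(sorted((i, (i + 1) % 5))) for i in range(5)}
assert not has_mono_triangle(5, {e: 0 if e in cycle else 1 for e in edges5})
```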
## The conjecture
In 1973, Stefan Burr and Paul Erdős made the following conjecture:
For every integer p there exists a constant cp so that any p-degenerate graph G on n vertices has Ramsey number at most cp n.
That is, if an n-vertex graph G is p-degenerate, then a monochromatic copy of G should exist in every two-edge-colored complete graph on cp n vertices.
## Known results
Before the full conjecture was proved, it was first settled in some special cases. It was proven for bounded-degree graphs by Chvátal et al. (1983); their proof led to a very high value of cp, and improvements to this constant were made by Eaton (1998) and Graham, Rödl & Rucínski (2000). More generally, the conjecture is known to be true for p-arrangeable graphs, which includes graphs with bounded maximum degree, planar graphs and graphs that do not contain a subdivision of Kp.[2] It is also known for subdivided graphs, graphs in which no two adjacent vertices have degree greater than two.[3]
For arbitrary graphs, the Ramsey number is known to be bounded by a function that grows only slightly superlinearly. Specifically, Fox & Sudakov (2009) showed that there exists a constant cp such that, for any p-degenerate n-vertex graph G,
${\displaystyle r(G)\leq 2^{c_{p}{\sqrt {\log n}}}n.}$
## References
• Alon, Noga (1994), "Subdivided graphs have linear Ramsey numbers", Journal of Graph Theory, 18 (4): 343–347, doi:10.1002/jgt.3190180406, MR 1277513.
• Burr, Stefan A.; Erdős, Paul (1975), "On the magnitude of generalized Ramsey numbers for graphs", Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. 1 (PDF), Colloq. Math. Soc. János Bolyai, 10, Amsterdam: North-Holland, pp. 214–240, MR 0371701.
• Chen, Guantao; Schelp, Richard H. (1993), "Graphs with linearly bounded Ramsey numbers", Journal of Combinatorial Theory, Series B, 57 (1): 138–149, doi:10.1006/jctb.1993.1012, MR 1198403.
• Chvátal, Václav; Rödl, Vojtěch; Szemerédi, Endre; Trotter, William T., Jr. (1983), "The Ramsey number of a graph with bounded maximum degree", Journal of Combinatorial Theory, Series B, 34 (3): 239–243, doi:10.1016/0095-8956(83)90037-0, MR 0714447.
• Eaton, Nancy (1998), "Ramsey numbers for sparse graphs", Discrete Mathematics, 185 (1–3): 63–75, doi:10.1016/S0012-365X(97)00184-2, MR 1614289.
• Fox, Jacob; Sudakov, Benny (2009), "Two remarks on the Burr–Erdős conjecture", European Journal of Combinatorics, 30 (7): 1630–1645, arXiv:0803.1860, doi:10.1016/j.ejc.2009.03.004, MR 2548655.
• Graham, Ronald; Rödl, Vojtěch; Rucínski, Andrzej (2000), "On graphs with linear Ramsey numbers", Journal of Graph Theory, 35 (3): 176–192, doi:10.1002/1097-0118(200011)35:3<176::AID-JGT3>3.0.CO;2-C, MR 1788033.
• Graham, Ronald; Rödl, Vojtěch; Rucínski, Andrzej (2001), "On bipartite graphs with linear Ramsey numbers", Paul Erdős and his mathematics (Budapest, 1999), Combinatorica, 21 (2): 199–209, doi:10.1007/s004930100018, MR 1832445
• Kalai, Gil (May 22, 2015), "Choongbum Lee proved the Burr-Erdős conjecture", Combinatorics and more, retrieved 2015-05-22
• Lee, Choongbum (2017), "Ramsey numbers of degenerate graphs", Annals of Mathematics, 185 (3): 791–829, arXiv:1505.04773, doi:10.4007/annals.2017.185.3.2
• Li, Yusheng; Rousseau, Cecil C.; Soltés, Ľubomír (1997), "Ramsey linear families and generalized subdivided graphs", Discrete Mathematics, 170 (1–3): 269–275, doi:10.1016/S0012-365X(96)00311-1, MR 1452956.
• Rödl, Vojtěch; Thomas, Robin (1991), "Arrangeability and clique subdivisions", in Graham, Ronald; Nešetřil, Jaroslav (eds.), The Mathematics of Paul Erdős, II (PDF), Springer-Verlag, pp. 236–239, MR 1425217.
|
# Euler's method¶
## Modules – Ordinary Differential Equations¶
Last edited: February 9th 2020
This notebook gives a more thorough walkthrough of the theoretical aspects of Euler's method as well as providing a slightly more advanced code than our other notebook on Euler's method, which focuses on the implementation of Euler's method. We will illustrate how one can use this simple numerical method to solve higher order differential equations, as well as explaining the notions of instability and local and global truncation errors.
## Introduction¶
Solving differential equations is at the heart of both physics and mathematics. Ordinary differential equations, ODEs for short, appear in all kinds of problems in physics, whether it be Newtonian mechanics (where we try to find the equations of motion), electromagnetic theory, or quantum mechanics. However, more often than not, these ODEs do not have analytical solutions, and we have to solve them numerically. In this notebook we will give an extensive review of Euler's method, the simplest algorithm for solving ODEs. In contrast with our other notebook on Euler's method, Simple implementation of Euler's method, we will cover the algorithm quite thoroughly, as we will show the derivation of the method, error estimates, and stability. Lastly, we will show how we can use Euler's method to solve higher order differential equations, by reducing them to a system of first order differential equations.
Let's start by giving a concrete example on what kind of problem we wish to solve. Consider the first order differential equation
$$\frac{d}{dt} y(t) = g(y(t), t), \label{genODE}$$
where $y(t)$ is the function we want to calculate, and $g(y(t), t)$ is a function that can depend on $y(t)$, but it also might have explicit time dependence. If we have the initial condition $y(0) = y_0$, how do we solve \eqref{genODE}? Analytically there exists a multitude of different schemes to solve these kinds of equations, all with their own limitations and areas of use. An example of this is the Integrating factor, which is a scheme that the reader might already be familiar with. However, all these analytical schemes depend on the existence of an analytical solution. By solving equation \eqref{genODE} numerically we are not restricted to these kinds of ODEs, and thus we have opened a Pandora's box of new ODEs that we can solve!
## Theory¶
### Discretization¶
One of the downsides of using numerical schemes to solve ODEs is that we have to discretize our time variable. This means that the time variable $t$ can only take a set of predetermined discrete values, called grid points; it is no longer a continuous variable. We define this set of possible time values to be of the form
$$t_n = t_0 + nh, \quad \mathrm{with} \quad n = 0, 1, 2,..., N,$$
where $t_0$ is often the time value at which we know our initial condition, and $h$ is the spacing between adjacent discrete time values. The relation between $N$ and $h$ is given by
$$h = \frac{t_N - t_0}{N}, \label{h&N}$$
where $N + 1$ is the number of discrete time points we have in our simulation (the + 1 appears due to our choice of zero-indexing), while $t_N$ denotes the largest time value we have in our simulation. You might think of $h$ as the coarseness of our time variable; the smaller $h$, the more grid points we need to cover the same time interval. For instance, if we want to discretize the interval $[0,1]$ such that we have grid points every 0.01 seconds, i.e. the distance between grid points is 0.01 (recall that this is our definition of $h$), we use equation \eqref{h&N} to deduce that we need $N=100$. In general, our numerical approximation will be better if we choose a small $h$. Note that as the size of $h$ decreases, the number of discrete time values between $t_0$ and $t_N$ increases. We pay for the increased level of precision by increasing the number of calculations needed, thus increasing the runtime of our program. It is highly encouraged to reflect on what the necessary coarseness of the discretized time variable is before solving a problem.
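As a small illustration (a minimal sketch, not part of the original notebook), the discretization above can be set up in a couple of lines; here we assume the interval $[0,1]$ with $h=0.01$ from the example in the previous paragraph.
In [ ]:
# A minimal sketch of the discretization described above: the interval [0, 1]
# with grid points every h = 0.01, so that N = (t_N - t_0)/h = 100.
import numpy as np

t_0, t_N = 0.0, 1.0
h = 0.01
N = int((t_N - t_0) / h)               # number of steps, from equation (h&N)
t_list = np.linspace(t_0, t_N, N + 1)  # N + 1 grid points t_n = t_0 + n*h

print(N)           # 100
print(t_list[:3])  # [0.   0.01 0.02]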
### Euler's method
There are several ways to derive Euler's method. Here we present a proof which revolves around Taylor series, thus basic calculus is required to follow it. Any "nice" function $y(t)$ (we do not elaborate on what this requires) can be written as
$$y(t) = \sum_{n=0}^{\infty} \frac{y^{(n)}(t_0)}{n!}(t-t_0)^n = y(t_0) + \frac{y'(t_0)}{1!}(t-t_0) + \frac{y''(t_0)}{2!}(t-t_0)^2 + \dots, \label{Taylor}$$
where $t_0$ is an arbitrary value. We say that we expand $y(t)$ around $t_0$. Now we use Taylor's theorem and truncate the series at the first order
$$y(t_0 + h) = y(t_0) + \frac{y'(t_0)}{1!}h + \frac{1}{2!}h^2 y''(\tau), \label{Taylor_trunc}$$
for some $\tau \in [t_0, t_0 + h]$. Reshuffling equation \eqref{Taylor_trunc}, and solving for $y'(t_0)$ we get
$$y'(t_0) = \frac{y(t_0 + h)- y(t_0)}{h} + \mathcal{O}(h), \label{deriv}$$
where we use big O notation. Now we use the essential idea of Euler's method: from \eqref{genODE}, we know the exact expression for $y'(t_0)$, because we know that $y'(t) = g(y(t), t)$! Inserting this into \eqref{deriv} and solving for $y(t_0 + h)$ we get
$$y(t_0 + h) = y(t_0) + h g(y(t_0), t_0) + \mathcal{O}(h^2). \label{next_step}$$
Thus, by equation \eqref{next_step} we have an estimate for what the value of $y$ is at our first grid point $t= t_0 + h$! By choosing $h$ small enough (and hoping that the $\mathcal{O}$ term is not too large), our estimate will be quite precise. Neglecting the $\mathcal{O}$ term, we write
$$y(t_0 + h) \approx y(t_0) + h g(y(t_0), t_0). \label{next_step_ish}$$
If we have our initial condition at $t_0$ and denote this value as $y_0$, we can use Euler's method to find an approximation of $y$ at $t_1 = t_0 +h$. $y$ at $t_1$ is denoted as $y_1$. This approximation can be calculated by the formula
$$y_1 = y_0 + hg(y_0).$$
Now, to find the value $y_2$ at $t_2 = t_1 + h = t_0 + 2h$, we use the same formula, but with $y_1$ instead of $y_0$
$$y_2 = y_1 + h g(y_1).$$
The most general form of Euler's method is written as
$$y_{n+1} = y_n + h g(y_n). \label{Euler}$$
(If $g$ has explicit time dependence, it is evaluated at $t_n$ as well, i.e. $y_{n+1} = y_n + h\,g(y_n, t_n)$; in the examples below $g$ depends only on $y$, so we drop the time argument.)
Now we will illustrate how to implement Euler's method by using the same example as in our Implementation of Euler's method notebook.
$$\frac{dy}{dt} = ky(t), \label{ODE}$$
where $k=\mathrm{ln}(2)$ and $y(0) = 1$. In the previous notebook we said that $y(t)$ was the population size of a bacterial colony at time $t$, and we repeat the notation here. We can solve equation \eqref{ODE} analytically to obtain $y(t) = 2^t$, so we have something to compare our numerical results to.
In [4]:
# Importing the necessary libraries
import numpy as np # NumPy is used to generate arrays and to perform some mathematical operations
import matplotlib.pyplot as plt # Used for plotting results
# Updating figure params
newparams = {'figure.figsize': (15, 7), 'axes.grid': False,
'lines.markersize': 10, 'lines.linewidth': 2,
'font.size': 15, 'mathtext.fontset': 'stix',
'font.family': 'STIXGeneral', 'figure.dpi': 200}
plt.rcParams.update(newparams)
In [5]:
def step_Euler(y, h, f):
"""Performs a single step of Euler's method.
Parameters:
y: Numerical approximation of y at time t
h: Step size
f: RHS of our ODE (RHS = Right hand side). Can be any function that only has y as a variable.
Returns:
next_y: Numerical approximation of y at time t+h
"""
next_y = y + h * f(y)
return next_y
def full_Euler(h, f, y_0 = 1, start_t = 0, end_t = 1):
""" A full numerical aproximation of an ODE in a set time interval. Performs consecutive Euler steps
with step size h from start time until the end time. Also takes into account the initial values of the ODE
Parameters:
h: Step size
f: RHS of our ODE
y_0 : Initial condition for y at t = start_t
start_t : The time at the initial condition, t_0
end_t : The end of the interval where the Euler method is performed, t_N
Returns:
y_list: Numerical approximation of y at times t_list
t_list: Evenly spaced discrete list of time with spacing h.
Starting time = start_t, and end time = end_t
"""
# Number of discretisation steps
N = int((end_t - start_t) / h)
# Following the notation in the theory, we have N+1 discrete time values linearly spaced
t_list = np.linspace(start_t, end_t, N + 1)
# Initialise array to store y-values
y_list = np.zeros(N + 1)
y_list[0] = y_0
# Assign the rest of the array using N Euler_steps
for i in range(0, N):
y_list[i + 1] = step_Euler(y_list[i], h, f)
return y_list, t_list
Now that we have our functions defined, we only need to define our RHS (Right hand side) of our differential equation, which we in the theory part denoted as $g(y)$.
In [6]:
def g(y):
"""Defines the right hand side of our differential equation. In our case of bacterial growth, g(y) = k*y
Parameters:
y: Numerical approximation of y at time t
Returns:
growth_rate: Current population size multiplied with a constant of proportionality.
In this case this is equal to ln(2)
"""
growth_rate = np.log(2)*y
return growth_rate
# Now we can find the the numerical results from Euler's method
# and compare them to the analytical solution
# Input parameters
y_0 = 1 # Initial population size, i.e. a single bacteria
h = 0.01 # Step size
t_0 = 0 # We define the time at our initial observation as 0
t_N = 10 # 10 days after our initial observation of a single bacteria
# Calculating results from Euler and plotting them
y_list, t_list = full_Euler(h, g, y_0, t_0, t_N)
plt.plot(t_list, y_list, label="Numerical", linewidth=1)
# Plotting the analytical solution derived earlier
plt.plot(t_list,np.power(2, t_list), label="Analytical", linewidth=1)
# Making the plot look nice
plt.legend()
plt.title("The population size of a bacterial colony as a function of time")
plt.xlabel(r'$t$ [days]')
plt.ylabel(r'$y$ [# bacteria]')
plt.show()
# Let's see how far off our numerical approximation is after 10 days.
last_analytical = np.power(2,t_list[-1]) # Extracting the last element of the analytical solution
last_numerical = y_list[-1] # Extracting the last element of the numerical solution
print("After 10 days, our numerical approximation of bacterias is off by: %.2f" %(last_analytical - last_numerical))
After 10 days, our numerical approximation of bacterias is off by: 24.20
We see that our model fares quite well with a step size $h=0.01$, as it only deviates from the analytical solution by 24 bacteria cells, or 2.4%. Using a smaller $h$ would yield a smaller error, and this is the next theoretical aspect we will consider.
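As a quick check of that percentage (a minimal sketch, not part of the original notebook), we can compute the relative error directly from the values defined above:
In [ ]:
# Relative error of the final value, using last_analytical and last_numerical from above.
relative_error = (last_analytical - last_numerical) / last_analytical
print("Relative error after 10 days: %.2f %%" % (100 * relative_error))  # about 2.4 %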
### Local and Global truncation errors
In the previous theory section, we showed the following relations
$$y(t_0 + h) = y(t_0) + h g(y(t_0), t_0) + \mathcal{O}(h^2) \\ y(t_0 + h) \approx y(t_0) + h g(y(t_0), t_0). \label{Euler_approx}$$
The first equation is exact, while the second is an approximation. The method of Taylor-expanding a function and truncating it at a certain order is widely used in physics (in fact it is the cornerstone of almost every field in physics). However, it is important to study the consequences of this approximation.
In numerical analysis, an important concept is the local truncation error. This describes the error we make after each time step. If we have the initial condition $y_0$, we can use our numerical scheme to find an approximation of what $y(t_0 + h)$ ought to be, denoted $y_{approx}(t_0+h)$. This is what we do in the second line above. If we then compare this approximation to the exact solution (i.e. the first line), denoted $y_{exact}(t_0 + h)$, we can find the local truncation error at the first time step, denoted $\tau_1$
$$\tau_1 = \mid y_{exact}(t_0 + h) - y_{approx}(t_0 + h) \mid. \label{local_trunc_error}$$
Using equation \eqref{Euler_approx}, we see that $\tau_1 = \mathcal{O}(h^2)$. Thus our local truncation error is of second order, meaning that if we use an $h$ that is only half of the original $h$, the local truncation error will be one fourth of the original size.
Analogous to the local truncation error, we have the global truncation error. At each time step of our numerical simulation, we will have an approximation of $y(t)$ at $t=t_n$, denoted $y_{approx}(t_n)$. The global truncation error, $e_n$, is defined as
$$e_n = \mid y_{exact}(t_n) - y_{approx}(t_n) \mid. \label{global_trunc_error}$$
Equation \eqref{global_trunc_error} describes how far off our numerical scheme is from the exact solution. Note that the global truncation error is not the sum of all the local truncation errors, but rather the accumulation of errors made in each step (i.e. the sum of the local truncation errors if they were defined without the absolute value). Here we present a somewhat heuristic approach to find $e_n$. If we know our local truncation error is of the order $\mathcal{O}(h^2)$, we know that for each time step we pick up an error $ah^2$, where $a$ is just some constant. In order to get to $t_N$, we need to make $N$ steps, and using equation \eqref{h&N} we see that the number of steps needed is inversely proportional to $h$. Thus the accumulated error scales as $e_N \approx N a h^2 = a\frac{t_N - t_0}{h}h^2 = a(t_N - t_0)h$. Hence, our conclusion is that the global truncation error for Euler's method is of the order $\mathcal{O}(h)$. Note that this relation holds in general: if an ODE scheme has local truncation error $\mathcal{O}(h^{p+1})$, the global truncation error is $\mathcal{O}(h^{p})$.
Now, let's put the theory derived here into practice! The following code is slightly more technical than what we have done until now, but it is well worth the effort, as we will rediscover our theoretical results.
In [7]:
from prettytable import PrettyTable # This is imported solely to get the output on a nice format
def trunc_errors(f, base=2, h_max_log=-1, h_min_log=-6, y_0=1, start_t=0, end_t=2):
"""A full numerical approximation of an ODE in a set time interval. Performs consecutive Euler steps
with step size h from start time until the end time. Also takes into account the initial values of the ODE.
Returns both the local and global truncation error for each step size.
Parameters:
f: RHS of our ODE
base: The base of our logspace, our h_list is created by taking: base **(h_list_log)
h_min_log: Our smallest element in our h_list is defined by: base**(h_min_log)
h_max_log: Our largest element in our h_list is defined by: base**(h_max_log)
y_0 : Initial condition for y at t = start_t
start_t : The time at the initial condition, t_0
end_t : The end of the interval where the Euler method is performed, t_N
Returns:
t: Table containing the time step size, the global truncation error and the local truncation error
"""
K = h_max_log - h_min_log + 1
h_list = np.logspace(h_min_log, h_max_log, K, base=base) # Creates an array that is evenly spaced on a logscale
t = PrettyTable(['h', 'Global truncation error', 'Local truncation error'])
for i in range(len(h_list)):
y_list, t_list = full_Euler(h_list[i], g, y_0, start_t, end_t) # Runs Euler Algorithm with a given h
analytic_list = np.power(2, t_list)
# Want to format our output nicely, thus we need to add h, Global trunc error
# and Local trunc error (for the first time step) to our row
t.add_row([h_list[i], np.abs(y_list[-1] - analytic_list[-1]), np.abs(y_list[1] - analytic_list[1])])
t.sortby = "h" # Formatting the table
t.reversesort = True
print(t)
return t
t = trunc_errors(g, 2, h_min_log=-8)
+------------+-------------------------+------------------------+
| h | Global truncation error | Local truncation error |
+------------+-------------------------+------------------------+
| 0.5 | 0.7120865983468994 | 0.06763997209312245 |
| 0.25 | 0.40885068263221713 | 0.015920319862734678 |
| 0.125 | 0.22086218920020384 | 0.003864335095264515 |
| 0.0625 | 0.11506573325497316 | 0.0009520836424172785 |
| 0.03125 | 0.05876725013275852 | 0.00023629926161827797 |
| 0.015625 | 0.029702418263456654 | 5.886135545130067e-05 |
| 0.0078125 | 0.014932231644601224 | 1.4688764678139066e-05 |
| 0.00390625 | 0.007486540203270664 | 3.66887614022815e-06 |
+------------+-------------------------+------------------------+
Clearly, the global truncation error is of order $\mathcal{O}(h)$: each time we cut $h$ in half, the corresponding global truncation error is also cut in half, while the local truncation error is one fourth of the previous one.
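As a quick sanity check (a minimal sketch, not part of the original notebook), we can estimate the observed order of convergence $p$ from two neighbouring rows of the table above: if $e(h) \approx Ch^p$, then $p \approx \log(e(h)/e(h/2))/\log 2$.
In [ ]:
# Estimating the observed order of convergence from the global errors in the table above.
import numpy as np

e_h      = 0.05876725013275852   # global truncation error for h = 0.03125
e_h_half = 0.029702418263456654  # global truncation error for h = 0.015625

p = np.log(e_h / e_h_half) / np.log(2)
print(p)  # roughly 0.98, close to the expected first order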
The order of the global truncation error is what defines the order of a numerical method for solving ODEs. We say that Euler's method is a first-order method. There exists a multitude of different ways to solve ODEs, of which Euler's method is the simplest. In our notebook Projectile motion, we demonstrate how we can use the fourth order Runge-Kutta method (we also have a notebook that describes the more theoretical aspects of Runge-Kutta methods).
For numerical work that is more sensitive to errors (for example long simulations of planetary motion), the additional work of implementing a higher order method is well worth the investment, as the number of required time steps is often far lower.
### Instability
We will now demonstrate a simple theoretical example that illustrates how Euler's method may break down, i.e. the numerical solution starts to deviate from the exact solution in dramatic ways. Usually, this happens when the numerical solution grows large in magnitude while the exact solution remains small.
Let's look at the ODE
$$\frac{dy}{dt} = -y \quad \mathrm{with} \quad y(0) = 1.$$
Trivially, this has the exact solution $y(t) = \mathrm{e}^{-t}$. However, if we look at what happens in a single step of Euler's method, using equation \eqref{Euler},
$$y_{n+1} = y_n + h g(y_n) = y_n - h y_n = (1-h) y_n.$$
Observe that for $h=1$ our solution simply becomes zero right away; if $h>1$, our solution will oscillate between positive and negative values; and if $h>2$, our solution will grow without bound (in absolute value) while oscillating between positive and negative values. All three of these cases are drastically different from the exact solution! In this notebook we will not go into further detail on instability, but we chose to include this example to demonstrate how numerical methods will fail under certain conditions.
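To see this concretely (a minimal sketch, not part of the original notebook), here is Euler's method applied to $\frac{dy}{dt} = -y$, $y(0)=1$, with a step size $h>2$:
In [ ]:
# Euler's method on dy/dt = -y with an unstable step size h > 2.
import numpy as np

h = 2.5
N = 10
y = np.zeros(N + 1)
y[0] = 1.0
for n in range(N):
    y[n + 1] = (1 - h) * y[n]   # y_{n+1} = y_n + h*(-y_n)

print(y)  # |y_n| = 1.5**n grows without bound, while the exact solution exp(-t) decays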
### Higher order derivatives
Lastly, we will show how it is possible to use Euler's method to solve higher order differential equations. Unfortunately, there is a bit of ambiguity with regard to the nomenclature here. We have already talked about the order of our ODE solver, but now we introduce the order of the differential equation we want to solve. The order of the latter is simply the highest order derivative that appears in our ODE. For example, the ODE
$$\frac{d^3y}{dt^3} + \frac{dy}{dt} = -y,$$
is of the third order.
#### Example: Metal sphere dropped from the Thermosphere
In this (highly contrived) example, we will drop a metal sphere from the thermosphere and study its trajectory towards the earth's surface. The uppermost layers of the thermosphere are 400 km from the surface of the earth. Using Newton's law of gravity, we can calculate that the gravitational acceleration up there is 8.70 m/s^2, while at the earth's surface it is 9.82 m/s^2. Thus, we need to update our value for the gravitational force, denoted $F_G$, as the sphere falls.
We also need to account for the drag force exerted on the sphere, denoted $F_D$. Let's write down Newton's second law in order to find the equations of motion
$$ma = m \frac{d^2y}{dt^2} = F_D + F_G = Dv^2 - \frac{GmM}{y^2}. \label{N2_first}$$
Here we denote $D$ as the drag coefficient, $v$ as the speed of the sphere, $G$ as the (proper) gravitational constant, and $m$ and $M$ as the mass of the sphere and of the earth respectively. Note that the y-axis points away from the earth. Reshuffling equation \eqref{N2_first} slightly, we find
$$\frac{d^2y}{dt^2} = \frac{D}{m} v^2 - \frac{GM}{y^2}. \label{N2_sec}$$
This form is still not quite ready for the algorithm we used previously, so we will use a final trick: we simply note that $\frac{dy}{dt} = v$ and rewrite equation \eqref{N2_sec} one final time as two equations.
$$\frac{dy}{dt} = v \\ \frac{dv}{dt} = \frac{D}{m} v^2 - \frac{GM}{y^2}. \label{SysODE}$$
We have reduced our second order differential equation to a set of two first order equations! Note that we still need two initial conditions, $y_0$ and $v_0$. Applying Euler's method to these two equations we arrive at
$$y_{n+1} = y_n + h v_n \\ v_{n+1} = v_n + h [ \frac{D}{m} v_n^2 - \frac{GM}{y_n^2}]. \label{SysODE_Euler}$$
This can be further generalized: any $N$th order differential equation can be rewritten as a system of $N$ first order differential equations, which is a powerful tool for solving higher order ODEs. We will now introduce the conventional notation for solving higher order ODEs by defining the vector $\vec{w_n}$ as
$$\vec{w_n} = \begin{bmatrix}y_n \\ v_n\end{bmatrix}.$$
Let $f$ be a function that transforms our $w_n$ as described by equation \eqref{SysODE_Euler},
$$f(\vec{w_n} ) = f \begin{bmatrix}y_n \\ v_n\end{bmatrix} = \begin{bmatrix}v_n \\ \frac{D}{m} v_n^2 - \frac{GM}{y_n^2}\end{bmatrix}.$$
with no time dependence, such that $$\dot{\vec{w_n}}=f(\vec{w_n}).$$ For the interested reader, we expand more on this subject in the notebooks Projectile motion and Runge-Kutta methods. By implementing the function $f$ as our right hand side of our ODE and by slightly tweaking our previous functions we can study the trajectory of our sphere.
In [8]:
## Higher order differential equations
def step_Euler_high(w, t, h, f, deg):
"""Performs a single step of Euler's method on vector form.
Parameters:
w: Numerical approximation of w at time t
t: The time the Euler step is performed at
h: Step size
f: RHS of our ODE
Returns:
next_w: Numerical approximation of w at time t+h
"""
next_w = w + h * f(w, t, deg)
return next_w
# We are going to store data in matrix form, so we illustrate the structure of the matrix here for clarity
# The matrix M will have the following form when completely filled with data:
#
# M[ROW, COLUMN]
# The number of rows is equal to the degree of the ODE, denoted k
# Here we show how it will look for the problem discussed above
# N COLUMNS
# -----------------------------------
# | y0 y1 y2 ... y_N-2 y_N-1
# | v0 v1 v2 ... v_N-2 v_N-1
#
# Writing ":" in M[ROWS, COLUMNS] such as M[:, 0] returns an array containing
# the first column. M[0, :] returns the first row
#
def full_Euler_high(h, f, init_cond, start_t=0, end_t=1):
""" A full numerical approximation of an ODE in a set time interval.Performs consecutive Euler steps
with step size h from start time until the end time. Also takes into account the initial values of the ODE
Parameters:
h: Step size
f: RHS of our ODE (vector function)
init_cond: Array containing the necessary initial conditions
start_t : The time at the initial condition
end_t : The end of the interval where the Euler method is performed
Returns:
M: Matrix with number of rows equal to the order of the ODE, and N columns
Contains the numerical approximation of the variable we wish to solve at times t_list
t_list: Evenly spaced discrete time list with spacing h, starting time = start_t, and end time = end_t
"""
deg = len(init_cond) # The order of the ODE is equal to the number of initial conditions we need
N = int((end_t - start_t) / h)
t_list = np.linspace(start_t, end_t, N + 1)
M = np.zeros((deg, N + 1)) # Matrix storing the values of the variable we wish to solve for
# (the zeroth derivative), as well as the higher order derivatives
M[:, 0] = init_cond # Storing the initial conditions
for i in range(0, N):
M[:,i + 1] = step_Euler_high(M[:, i], t_list[i], h, f, deg) # Running N Euler steps
return M, t_list
In [9]:
D = 0.0025 # Drag coefficient
m = 1 # mass of the metal sphere
M_earth = 5.97 * 10 ** 24 # Mass of the earth
G = 6.67 * 10 ** (-11) # Gravitational constant
def g(w, t, deg):
"""Defines the right hand side of our differential equation. In our case it is a vector function that
determines the equation of motion.
Parameters:
w: Numerical approximation of w at time t
t: Time, not relevant here as we have no explicit time dependence
deg: Degree of the ODE we wish to solve
Returns:
next_w: Numerical approximation of w at time t
"""
next_w = np.zeros(deg)
next_w[0] = w[1]
next_w[1] = D * w[1] ** 2 / m - G * M_earth / w[0] ** 2
return next_w
In [10]:
M, t = full_Euler_high(0.01, g, np.array([6771*10**3,0]), 0, 200)
fig = plt.figure(figsize=(18, 6)) # Create figure and determine size
ax1 = plt.subplot(121)
ax1.set_title("Trajectory of metal sphere dropped from the thermosphere")
ax1.set_xlabel(r"$t$ [s]")
ax1.set_ylabel(r"$y$ [km]")
# Only plotting the first twenty seconds of the trajectory, and rescaling y into kilometres
plt.plot(t[:2000], M[0][:2000] / 10 ** 3)
ax2 = plt.subplot(122)
ax2.set_title("Speed of the metal sphere as a function of time traveled")
ax2.set_xlabel(r"$t$ [s]")
ax2.set_ylabel(r"$v_{term}$ [m/s]")
# Plotting the speed as a function of time in the interval [10 s, 20 s] to study how the speed changes
# after the main acceleration at the start of the free fall
ax2.plot(t[1000:4000], M[1][1000:4000])
fig.tight_layout()
plt.show()
We can deem the results produced by Euler's method valid due to our physical understanding of the problem. On the plot to the right we can study the speed of the metal sphere as it falls towards the earth. The concept of terminal speed, i.e. the speed at which the drag force is equal to the gravitational force, is useful here. We observe that after the initial acceleration, the speed seems to flatten out at around 60 m/s. However, we know that the speed of the sphere should in fact increase even more, as the gravitational pull of the earth gets stronger as the sphere moves towards the center of the earth (remember, gravity scales as $\frac{1}{r^2}$). Let's see if we can observe this effect in our numerical solution.
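As a quick consistency check (a minimal sketch, not part of the original notebook), we can compute the terminal speed at which the drag force balances gravity, $\frac{D}{m}v^2 = \frac{GM}{y^2}$, using the parameter values defined above and the initial altitude:
In [ ]:
# Terminal speed where drag balances gravity, D*v^2/m = G*M/y^2, at the starting altitude.
import numpy as np

D = 0.0025
m = 1
M_earth = 5.97e24
G = 6.67e-11
y_start = 6771e3   # initial distance from the earth's centre, as used above

v_term = np.sqrt(G * M_earth * m / (D * y_start**2))
print(v_term)      # roughly 59 m/s, matching the plateau seen in the right-hand plot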
In [11]:
fig = plt.figure(figsize=(12, 4)) # Create figure and determine size
plt.plot(t[10000:], M[1][10000:])
plt.title("Speed of the metal sphere as a function of time traveled")
plt.xlabel(r"$t$ [s]")
plt.ylabel(r"$v_{term}$ [m/s]")
plt.show()
We clearly observe the effect of the increasing gravitational force! For more examples on how to use physical principles to determine the validity of numerical results (i.e. Energy conservation etc.) check out our notebook on Projectile motion.
## Conclusion
Euler's method is a simple numerical procedure for solving ordinary differential equations, and its simplicity manifests itself both in the straightforward numerical implementation and in the ease with which the method can be applied to more complex problems. For more sophisticated problems that are more prone to numerical errors, more powerful methods such as the fourth order Runge-Kutta method should be considered instead (this is done in the aforementioned notebook). However, for the simple examples we considered here, it was sufficient!
# Is there something like Čech cohomology for p-adic varieties?
Suppose that I have a nice variety X over ℚ_p, with good reduction if you like, and a nice sheaf on X, say coming from a smooth group scheme G. I can cover X by some p-adic open sets U_α, for example the mod-p neighbourhoods coming from some model $\mathcal{X}$ of X. Clearly I can't expect to use Čech cohomology in a naïve way to compute the H^i(X,G) in terms of the cohomology of the U_α, because they don't overlap. But the information about how they fit together to make X is instead contained in the geometry of the special fibre of $\mathcal{X}$.
Is there a spectral sequence which calculates H^i(X,G) in terms of some sort of cohomology of the U_α, and some information about how they fit together?
## Motivation
In the situation described above, the Leray spectral sequence gives $$0 \to H^1(\mathcal{X},j_*G) \to H^1(X,G) \to H^0(\mathcal{X}, R^1 j_* G)$$ where $j\colon X \to \mathcal{X}$ is the inclusion of the generic fibre.
So in this situation I can compute H^1(X,G) in terms of: $R^1 j_* G$, which I want to think of as somehow being the cohomology of the p-adic discs covering X; and the cohomology of $\mathcal{X}$, which I want to think of as saying how those discs fit together.
I would like to see how to generalise this to smaller p-adic neighbourhoods.
Supplementary question:
Should I just go away and read a book on rigid cohomology?
@Martin: maybe not "just", but rigid cohomology certainly seems relevant to your question. I have a sense that if I went away and read such a book, I would be better equipped to answer it. – Pete L. Clark Feb 3 '10 at 15:30
For what topology on $X$ is your "nice sheaf" a sheaf? Zariski? Etale? What do you mean by $U_{\alpha}$? Are you thinking in terms of the rigid analytification of $X$? – B. Cais Feb 3 '10 at 18:46
I suppose I'm thinking of the étale topology, but I imagine that any answer would apply to other topologies too. I'm thinking of the U_alpha just as p-adic open subsets of X(Q_p), but you're welcome to tell me to put some other structure on them. – Martin Bright Feb 4 '10 at 11:48
The first comment to make is that Cech theory is really extremely general, and can be set up to compute the cohomology of any complex of abelian sheaves on any site (provided you have coverings that are cohomologically trivial). This is explained at least somewhat in SGA4, Expose 5 and EGA III, Chap 0, section 12.
I think you should be working with the rigid analytic space attached to $X$, and not with the $\mathbf{Q}_p$-points of $X$, and the latter really has no good topology on it besides the totally disconnected one induced from the topology on $\mathbf{Q}_p$.
Let's assume that $X$ has a model $\mathcal{X}$ over $\mathbf{Z}_p$ that is smooth and proper and write $\widehat{\mathcal{X}}$ for the formal completion of $\mathcal{X}$ along its closed fiber. Then the (Berthelot) generic fiber $\widehat{\mathcal{X}}^{rig}$ of $\widehat{\mathcal{X}}$ is a rigid analytic space that is canonically identified with the rigid analytification of $X$ (using properness here). Moreover, one has a "specialization morphism" of ringed sites $$sp:X^{an}\simeq \widehat{\mathcal{X}}^{rig}\rightarrow \widehat{\mathcal{X}}$$ with the property that for any (Zariski) locally closed subset $W$ of the target, the inverse image $sp^{-1}(W)$ is an admissible open of the rigid space $X^{an}$ (called the open tube over W). In this way, coverings of the special fiber by locally closed subsets give coverings of the rigid generic fiber by admissible opens, and you can use Cech theory with these coverings and or your favorite spectral sequence to compute sheaf cohomology in the rigid analytic world. Again using properness, by rigid GAGA this cohomology agrees with usual (Zariski) cohomology on the scheme $X$ (provided your sheaf is a coherent sheaf of $\mathcal{O}_X$-modules, say).
This idea of computing cohomology using admissible coverings of the associated rigid space is a really important one as it allows you to use the geometry of the special fiber. It occurs (allowing $\mathcal{X}$ to have semistable reduction) in the work of Gross on companion forms, of Coleman on $\mathcal{L}$-invariants and most prominently in Iovita-Coleman (see their article on "Frobenius and Monodromy operators"). This latter article might be a good place to start.
I would also highly recommend the articles of Berthelot:
http://perso.univ-rennes1.fr/pierre.berthelot/publis/Cohomologie_Rigide_I.pdf
http://perso.univ-rennes1.fr/pierre.berthelot/publis/Finitude.pdf
I'd also suggest the AWS 2007 notes by Brian Conrad for learning about rigid geometry, which seems generally quite pertinent to your situation. For etale cohomology of rigid spaces, you might want to look at the article of Berkovich, though this would require learning about his analytic spaces.
In any case, I hope this is a good start.
Does Cech cohomology generalize the notion of a grothendieck topology completely? – Harry Gindi Feb 4 '10 at 17:34
Thank you - this looks very much like what I'm after. Perhaps I should look at those references and then go away and read a book on rigid cohomology! – Martin Bright Feb 5 '10 at 11:34
While most enemies have seen few changes since Shattered branched off of the original PD, most of them have had their stats or behavior tweaked.
### General AI changes
Enemies will now attempt to replicate their victim's exact footsteps when chasing them, even if that is not the shortest path to the target. This change makes it much easier to kite them and escape or reposition the fight.
## Sewers enemies
### Marsupial rat
Marsupial Rat Marsupial rats are aggressive but rather weak denizens of the sewers. They have a nasty bite, but are only life threatening in large numbers.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
8 1 8 2 1 - 4 1 5 -
The rat is the basic sewer enemy, with no unique abilities or remarkable stats.
### Albino rat
Albino Rat This is a rare breed of marsupial rat, with pure white fur and jagged teeth.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
15 1 8 2 1 - 4 1 5 -
A rare variant of the rat with higher health. Its hits have a 50% chance to cause bleeding equal to the damage dealt.
It drops a mystery meat when defeated.
### Sewer snake
Sewer Snake These oversized serpents are capable of quickly slithering around blows, making them quite hard to hit. Magical attacks or surprise attacks are capable of catching them off-guard however. You can perform a surprise attack by attacking while out of the snake's vision. One way is to let a snake chase you through a doorway and then strike just as it moves into the door.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
4 0 10 25 1 - 4 2 7 -
The snake has less health than a rat, but MUCH higher evasion (actually comparable to late-game enemies). This makes it very difficult to hit without using surprise attacks or wands.
You can surprise it by hiding behind doors, attacking with a ranged weapon before it notices you, or using the Cloak of Shadows if you are the Rogue. It can also be easily dispatched with a wand. If none of these options is possible, harmful scrolls/potions/seeds will do the job as well.
Has a 25% chance to drop a random seed when defeated.
### Gnoll scout
Gnoll Scout Gnolls are hyena-like humanoids. They dwell in sewers and dungeons, venturing up to raid the surface from time to time. Gnoll scouts are regular members of their pack, they are not as strong as brutes and not as intelligent as shamans.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
12 2 10 4 1 - 6 2 8 -
Like sewer rats, gnoll scouts are just basic, yet slightly sturdier, enemies.
Gnoll scouts have a 50% chance to drop gold on death.
### Sewer crab
Sewer Crab These huge crabs are at the top of the food chain in the sewers. They are extremely fast and their thick carapace can withstand heavy blows.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
15 4 12 5 1 - 7 4 9 -
The crab has quite high health, very high armor for the early game, and deals the highest raw damage of all Sewers creatures. It also moves twice as fast as the Hero, making ranged weapons or surprise attacks more difficult to use.
If possible, you should equip a T2 weapon and armor before entering depth 3, where it can spawn. If not, try avoiding this enemy or using harmful potions/scrolls against it.
Has a 16.7% (1/6) chance to drop mystery meat when defeated.
### Slime
Slime Slimes are strange, slightly magical creatures with a rubbery outer body and a liquid core. The city sewers provide them with an ample supply of water and nutrients. Because of their elastic outer membrane, it is difficult to deal more than 6 damage to slimes from any one attack.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
20 0 12 5 2 - 5 4 9 -
Slimes reduce the damage they take, making it difficult to defeat them with a few hits. Any damage greater than 5 is reduced by the following formula:
$\text{damage taken} = 4 + \frac{\sqrt{8(\text{damage dealt} - 4) + 1} - 1}{2}$
In other words, the damage it will actually take based on the damage the Hero deals looks like this.
Damage dealt: 7 10 14 19 25 32 40 49 59 70 82 95 109 124 140
Damage taken: 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Unlike armor, which works best against low damage attacks, this form of damage reduction is most effective against high damage attacks. As you can see from the table above, you would need a 140 damage hit to one-shot this enemy, and with weapons available in the sewers you will likely need at least 5 hits to kill it. Most instant-kill methods won't be very effective against it, as they deal damage equal to the creature's health, which will in this case be significantly reduced. However, DoT effects are quite powerful against it, as the damage they deal per tick is quite low and therefore not subject to damage reduction. Just remember it's resistant to corrosion and immune to caustic ooze.
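A rough illustration (a hypothetical sketch, not taken from the game's source code) of the reduction formula above; the rounding to whole numbers is an assumption here:
import math

def slime_damage_taken(damage_dealt):
    # Hits of 5 or less go through unchanged; larger hits are compressed by the formula above.
    if damage_dealt <= 5:
        return damage_dealt
    return 4 + (math.sqrt(8 * (damage_dealt - 4) + 1) - 1) / 2

# The "damage dealt" values from the table above map onto the whole numbers 6..20:
print([round(slime_damage_taken(d)) for d in (7, 10, 14, 19, 25, 32, 40, 49, 59, 70, 82, 95, 109, 124, 140)])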
It has a 20% chance to drop a random T2 weapon. The chance is reduced by 1/2 for every weapon obtained from this enemy so far to prevent farming.
### Caustic slime
Caustic Slime This slime seems to have been tainted by the dark magic emanating from below. It has lost its usual green color, and drips with caustic ooze.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
20 0 12 5 2 - 5 4 9 Acidic
Caustic slime is a rare version of the regular slime. It has a 50% chance to apply caustic ooze on hit. It is therefore a good idea to stand in water while fighting this enemy, to get rid of the debuff easily.
Caustic slime drops a single blob of Goo when defeated.
### Swarm of flies
Swarm of Flies The deadly swarm of flies buzzes angrily. Every non-magical attack will split it into two smaller but equally dangerous swarms.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
50 0 10 5 1 - 4 3 9 Flying
Swarm of flies has significantly higher health than any other sewer enemy. After taking physical damage, it splits into two copies – each with half as much health – if there is an empty tile next to it in cardinal directions. If there is no valid space, splitting is skipped.
Try luring this enemy into a hallway before fighting it to ensure your Hero won't get surrounded. Wands aren't very effective against it, unless you make it split a few times and use a wand that can harm multiple enemies.
It has a 16.7% (1/6) chance to drop a potion of healing, decreasing every time it splits (to 1/7, 1/8, etc.). This chance is also reduced for every potion obtained this way. Swarms of flies can drop up to 5 potions in total per game. You might want to split the swarm as many times as possible to increase your chance of getting the potion. However, due to the drop cap it's not feasible to use it for potion farming like in vanilla.
## Prison enemies
### Skeleton
Skeleton Skeletons are composed of corpses' bones from unlucky adventurers and inhabitants of the dungeon, animated by emanations of evil magic from the depths below. After they have been damaged enough, they disintegrate in an explosion of bones.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
25 5 12 9 2 - 10 5 10 Undead
Skeleton is the most common prison enemy. When it dies, it explodes, dealing 6 to 12 damage to anything in a 3x3 area. Armor is twice as effective against damage dealt by this enemy.
When entering the prison you should have an armor that blocks at least 4 dmg on average to reliably protect you from the explosion unless you have a way to kill the skeleton before it gets to you.
It has a 16.67% chance to drop a weapon piece. The chance is reduced by 1/2 for every weapon obtained from this enemy so far to prevent farming.
### Crazy thief
Crazy Thief Though these inmates roam free of their cells, this place is still their prison. Over time, this place has taken their minds as well as their freedom. Long ago, these crazy thieves and bandits have forgotten who they are and why they steal. These enemies are more likely to steal and run than they are to fight. Make sure to keep them in sight, or you might never see your stolen item again.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
20 3 12 12 1 - 10 5 10 Undead
This enemy attacks twice per turn and attempts to steal an item from player's inventory with each attack.
• It cannot steal unique or upgraded items.
• Trying to steal a honeypot causes the pot to shatter and a golden bee will spawn to attack the thief.
• If the thief steals an item in a stack, they will take only 1 item of this type from it.
After successfully stealing an item, thieves try to run away from the Hero. However, they only move at 5/6 speed while holding an item, making it easier to catch up to or corner them. Getting hit while running away causes them to drop a single gold coin.
It drops the stolen item when defeated. However, if they manage to get out of the Hero's FOV for several turns, they teleport away and the stolen item is lost. It also has a 3% chance to drop a random ring or artifact. The chance is reduced by 2/3 for every ring or artifact obtained from this enemy so far to prevent farming.
### Crazy bandit
Crazy Bandit Though these inmates roam free of their cells, this place is still their prison. Over time, this place has taken their minds as well as their freedom. Long ago, these crazy thieves and bandits have forgotten who they are and why they steal. These enemies are more likely to steal and run than they are to fight. Make sure to keep them in sight, or you might never see your stolen item again.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
20 3 12 12 1 - 10 5 10 Undead
A rare variant of the crazy thief. On a successful steal, it cripples the Hero for 3 to 8 turns, poisons them for 5 to 7 turns and blinds them for 2 to 5 turns.
Has the same drops as its common variant; however, the initial ring/artifact drop chance is 100%.
### Necromancer
Necromancer These apprentice dark mages have flocked to the prison, as it is the perfect place to practise their evil craft. Necromancers will summon and empower skeletons to fight for them. Killing the necromancer will also kill the skeleton it summons.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
35 0 - 5 - 13 - 7 14 Undead
Unlike almost every other enemy, the necromancers are unable to directly attack their target. Instead, they summon a skeleton to fight for them; they can also heal it and grant buffs.
The summoning spell takes one turn to cast. If the tile on which a skeleton should appear is occupied, any character on that tile is pushed away. A skeleton summoned by the necromancer appears with 20 HP out of 25, has a darker color than normal skeletons, and drops nothing when defeated.
If a skeleton is already summoned, the necromancer heals it instead. The healing spell restores 5 HP and can pass through characters between the necromancer and its skeleton. It also applies the adrenaline buff for 3 turns to a skeleton on full HP.
It has a 20% (1/5) chance to drop a potion of healing. The chance is reduced for every previously dropped potion to prevent farming. Necromancers can drop up to 6 potions in total per game.
### Prison guard
Prison Guard Once keepers of the prison, these guards have long since become no different than the inmates. They shamble like zombies, brainlessly roaming through the halls in search of anything out of place, like you! They carry chains around their hip, possibly used to pull in enemies to close range.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
40 0 - 7 12 10 4 - 12 7 14 Undead
The prison guard has the highest stats of all prison enemies, which makes it very hard to kill, especially on earlier floors. It also has a one-time-use chain pull move, which will bring the Hero to melee distance and cripple them for a few turns, making it very difficult to use ranged weapons against it. This takes no time to execute, allowing the guard to attack the Hero right after pulling them in. The chain cannot pull things into chasms; however, it can pull you onto an identified trap.
Consider using harmful scrolls/potions/seeds against this enemy if your gear isn't strong enough. Just be careful with harmful plants and blobs as you can get pulled into them.
It has a 20% chance to drop a random armor piece when defeated. The chance is reduced by 1/2 for every armor piece obtained from this enemy so far to prevent farming.
### DM-100
DM-100 The DM-100 is an early model of dwarven 'defense machine' which was designed to protect dwarven miners in the caves below this prison. Their electrical shocks proved too weak however, and so they were gifted to the human prison above. The warden initially deemed them too cruel to use, but as the prisoners became more unruly they became a necessity.
HP Armor Accuracy Evasion Damage EXP EXP cap Properties
20 0 - 4 11 8 Melee: 2 - 8
Magic: 3 - 10
6 13 Inorganic, Electric
This enemy resembles vanilla gnoll shaman, just with different name and sprite and slightly modified stats. It attacks from distance using lightning bolts whenever its target is not in melee range, dealing 3-10 magic damage. This lightning attack doesn't arc to adjacent characters nor gets more powerful in the water.
It has a 33% chance to drop a random scroll when defeated.
## Caves enemies
### Vampire bat
Vampire Bat These brisk and tenacious inhabitants of the caves are much more dangerous than they seem. They replenish their health with each successful attack, which allows them to defeat much larger opponents.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
30 0 - 4 16 15 5 - 15 7 15 Flying
The most common enemy in the caves, which occasionally spawns in the prison as well. After this enemy attacks, it heals itself for (damage - 4) HP. It can also fly and moves two tiles per turn, making ranged weapons and surprise attacks more difficult to use against it.
This makes it nearly impossible to kill without a good enough melee weapon or armor as it can heal faster than the hero can damage it. Armor that blocks at least 6 dmg is needed to reliably prevent its healing.
It has a 16.67% (1/6) chance to drop a potion of healing. The chance is reduced for every previously dropped potion to prevent farming. Bats can drop up to 7 potions in total per game.
### Gnoll brute
Gnoll Brute Brutes are the largest, strongest, and toughest of all gnolls. When mortally wounded, they go berserk, gaining temporary shielding and a large boost to damage.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
40 0 - 8 20 15 5 - 25; 15-40 when enraged 8 15 -
When the brute's health is reduced to 0, it becomes enraged. This doesn't happen if it's killed by falling or if the corruption enchantment procs on its death.
Brute Rage A surge of physical power, adrenaline enhanced both attack and movement speed. This gnoll brute is dying, but wants to take you with it! The brute will die when its shielding wears off, either due to time or if it is damaged. In the meantime, it will deal hugely increased damage, so watch out! Shield remaining: X
When enraged, it gains 24 points of shielding, becomes immune to terror and its damage increases greatly. However, it loses 4 shield points every turn and therefore dies 6 turns after it enters this state. To avoid taking a hit from an enraged brute you can usually just run from it until it dies for good.
It has a 50% chance to drop a pile of gold when defeated.
### Armored brute
Gnoll Brute Brutes are the largest, strongest, and toughest of all gnolls. When mortally wounded, they go berserk, gaining temporary shielding and a large boost to damage.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
40 6 - 10 20 20 5 - 25; 15-40 when raging 8 15 -
A rare variant of the gnoll brute with much higher armor. It gains the same shielding when raging, but it only loses one shield point every 3 turns and can therefore survive for 72 turns before dying for good. If you don't want to fight it when enraged, get out of its FOV and hope you won't stumble upon it again before it dies.
It drops a random mail armor (3/4 chance) or plate armor (1/4 chance) when defeated.
### Gnoll shaman
In Shattered PD gnoll shamans only appear in the Caves and are much stronger than in vanilla. They cast magic bolts when their targets are not in melee range, effects of which depend on the color of the mask they are wearing.
It has a 3% chance to drop a random wand. The chance is reduced by 2/3 for every wand obtained from this enemy so far to prevent farming.
#### Red shaman
Gnoll Shaman Gnoll shamans are intelligent gnolls who use battle spells to compensate for their lack of might. They have a very respected position in gnoll society, despite their weaker strength. This shaman is wearing a red mask, and will use magic that temporarily weakens your attacking power.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
35 0 - 6 18 15 Melee: 5 - 10
Magic: 6-15
8 16 -
A common variant of the gnoll shaman (40% chance to spawn) wearing a red mask. Its magic bolt has a 50% chance to apply the weakness debuff, decreasing the target's physical damage by 33% for 20 turns.
#### Blue shaman
Gnoll Shaman Gnoll shamans are intelligent gnolls who use battle spells to compensate for their lack of might. They have a very respected position in gnoll society, despite their weaker strength. This shaman is wearing a blue mask, and will use magic that temporarily increases the damage you take.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
35 0 - 6 18 15 Melee: 5 - 10
Magic: 6-15
8 16 -
A common variant of the gnoll shaman (40% chance to spawn) wearing a blue mask.
Its magic bolt has a 50% chance to apply the vulnerable debuff on target, increasing physical damage taken by 33% for 20 turns.
#### Purple shaman
Gnoll Shaman Gnoll shamans are intelligent gnolls who use battle spells to compensate for their lack of might. They have a very respected position in gnoll society, despite their weaker strength. This shaman is wearing a purple mask, and will use magic that temporarily reduces your accuracy and evasion.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
35 0 - 6 18 15 Melee: 5 - 10
Magic: 6-15
8 16 -
An uncommon variant of the gnoll shaman (20% chance to spawn) wearing a purple mask.
Its magic bolt has a 50% chance to apply the hexed debuff on target, decreasing its accuracy and evasion by 20% for 30 turns.
### Cave spinner
Cave Spinner These greenish furry cave spiders try to avoid direct combat. Instead they prefer to wait in the distance while their victim struggles in their excreted cobweb, slowly dying from their venomous bite. They are capable of shooting their webs great distances, and will try to block whatever path their prey is taking.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
0 - 50 6 20 14 10 - 25 9 16 -
It has a 12.5% chance to drop a mystery meat when defeated.
### DM-200
DM-200 The DM-200 is the second generation of dwarven 'defense machine', which was designed to protect dwarven miners in the caves and city below. They are much larger and bulkier than their predecessors, and attack with devastating hydraulic fists. Their increased size is also their primary weakness, as they are unable to fit into the narrow tunnels and doorways of the dungeon. The dwarves were able to compensate for the DM-200's lack of mobility by allowing them to vent their toxic exhaust fumes at distant enemies, or enemies they cannot reach.
HP Armor Accuracy Evasion Damage Exp EXP Cap Properties
0 - 70 8 25 8 10-25 9 17 Large, Inorganic
This enemy can use a gas attack against targets it cannot reach, creating 100 units of toxic gas on each tile along the path to the target's position, with a 30 turn cooldown.
It has a 12.5% chance to drop a weapon or random armor piece using the Dwarf City tier distribution. The chance is reduced by 1/2 for every piece obtained from this enemy so far to prevent farming.
### DM-201
DM-201 The dwarves briefly experimented with more heavily emphasizing the DM-200's lack of mobility and created some DM models that are entirely stationary. The DM-201 is a retrofitted DM-200 which acts as a sentry turret that has no movement ability. In exchange for the lack of mobility, DM-201s have significantly more durability and attacking power. As DM-201s have no engine to vent exhaust from, the dwarves instead outfitted them with corrosive gas grenades! DM-201s are careful with these grenades however, and will only lob them when attacked from a distance.
HP Armor Accuracy Evasion Damage Exp EXP Cap Properties
120 0 - 8 25 8 15-25 9 17 Immovable, Inorganic, Large
A rare version of the DM-200. This enemy is immobile, but uses a ranged attack which creates corrosive gas at the enemy's position and around it, dealing 8 damage initially.
It drops a cursed metal shard when defeated.
## Dwarven Metropolis enemies
### Dwarven ghoul
Dwarven Ghoul As dwarven society slowly began to collapse, and the current king of the dwarves seized absolute power, those who were weak or who resisted him were not treated well. As the king grew more adept at wielding dark magic, he bent these dwarves to his will, and now they make up the footsoldiers of his army. Ghouls are not especially strong on their own, but always travel in groups and are much harder to kill in large numbers. When a ghoul is defeated, it will rise again after a few turns as long as another ghoul is nearby.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
45 0 - 4 24 20 16-22 5 20 Undead
The most common enemy in the Dwarven City, which occasionally spawns in the caves as well. While ghouls have lower HP than other city enemies, they make up for it by always moving through the dungeon in pairs.
A ghoul won't die immediately after getting its HP reduced to 0 as long as there is another ghoul in its FOV. Instead, it will kneel and wait until it recovers. This doesn't happen if it is corrupted or killed by a chasm. It rises back to life with X HP after 5 turns, increasing by 5 every time it rises. When it is defeated for good, it drops a pile of gold.
### Elemental
There are multiple types of elementals, each having different weaknesses and resistances and applying a different debuff on attack.
#### Fire elemental
Fire elemental Wandering fire elementals are a byproduct of summoning greater entities. They are too chaotic in their nature to be controlled by even the most powerful demonologist.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
65 0 - 5 25 20 16 - 26 10 20 Fiery, flying
A common variant of elemental (39.2% chance to spawn). Its melee and ranged attacks have a 50% chance to set the target on fire.
It has a 12.5% chance to drop a potion of liquid flame when defeated.
#### Frost elemental
Frost Elemental Elementals are chaotic creatures that are often created when powerful occult magic isn't properly controlled. Elementals have minimal intelligence, and are usually associated with a particular type of magic. Frost elementals are a common type of elemental which weakens enemies with chilling magic. They will chill their target with melee and occasional ranged attacks. Their magic is much more effective in water.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
65 0 - 5 25 20 16 - 26 10 20 Icy, Flying
A common variant of elemental (39.2% chance to spawn). Its ranged attack chills the target for 3 turns, or 5 turns if the target is in water. Its melee attack has a 33% chance to chill in the same way.
It has a 12.5% chance to drop a potion of frost when defeated.
#### Shock elemental
Shock Elemental Elementals are chaotic creatures that are often created when powerful occult magic isn't properly controlled. Elementals have minimal intelligence, and are usually associated with a particular type of magic. Shock elementals are a less common type of elemental which disrupts its enemies with electricity and flashes of light. In melee they will arc electricity to nearby enemies, and deal bonus damage to their primary target if they are in water. They will also occasionally focus a ranged blast of light at their target, temporarily blinding them.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
65 0 - 5 25 20 16 - 26 10 20 Electric, Flying
An uncommon variant of elemental (19.6% chance to spawn). Its ranged attack blinds the target for 5 turns. Its melee attack also damages nearby creatures for 40% of the initial damage, and deals 40% extra damage if the attacked target is in water.
It has a 25% chance to drop a scroll of recharging when defeated.
#### Chaos elemental
Chaos Elemental Elementals are chaotic creatures that are often created when powerful occult magic isn't properly controlled. Elementals have minimal intelligence, and are usually associated with a particular type of magic. Chaos elementals are rare and dangerous elementals which haven't stabilized to a particular element. They will unleash wild unpredictable magic when they attack.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
65 0 - 5 25 20 16 - 26 10 20 Flying
A rare variant of elemental (2.0% chance to spawn). Its melee and ranged attacks trigger a random cursed wand effect, except polymorph and fake crash.
It drops a scroll of transmutation when defeated.
### Dwarf warlock
Dwarf Warlock As the dwarves' interests shifted from engineering to arcane arts, warlocks came to power in the city. They started with elemental magic, but soon switched to demonology and necromancy. The strongest of these warlocks seized the throne of the dwarven city, and his cohorts were allowed to continue practising their dark magic, so long as they surrendered their free will to him. These warlocks possess powerful disruptive magic, and are able to temporarily hinder the upgrade magic applied to your equipment. The more upgraded an item is, the more strongly it will be affected.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
70 0 - 8 25 18 Melee: 16 - 22
Magic: 12 - 18
It has a 50% chance to drop a random potion when defeated.
### Dwarf monk
Dwarf Monk These monks are fanatics, who have devoted themselves to protecting their king through physical might. So great is their devotion that they have totally surrendered their minds to their king, and now roam the dwarven city like mindless zombies. Monks rely solely on the art of hand-to-hand combat, and are able to use their unarmed fists both for offense and defense. When they become focused, monks will parry the next physical attack used against them, even if it was otherwise guaranteed to hit. Monks build focus more quickly while on the move, and more slowly when in direct combat.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
68 0 - 2 30 30 12 - 25 11 21 Undead
It has a 8.3% chance to drop a ration of food when defeated.
### Senior monk
Senior Monk These monks are fanatics, who have devoted themselves to protecting their king through physical might. So great is their devotion that they have totally surrendered their minds to their king, and now roam the dwarven city like mindless zombies. This monk has mastered the art of hand-to-hand combat, and is able to gain focus while moving much more quickly than regular monks. When they become focused, monks will parry the next physical attack used against them, even if it was otherwise guaranteed to hit. Monks build focus more quickly while on the move, and more slowly when in direct combat.
HP Armor Accuracy Evasion Damage EXP EXP Cap Properties
68 0 - 2 30 30 16 - 24 11 21 Undead
It has a 8.3% chance to drop a pasty when defeated.
### Stone golem
Stone golem Golems are an attempt to correct the previous flaws in dwarven machinery via the dwarves' newfound use of magic. They are much more compact and efficient compared to the DM-300, while still being very deadly. While golems are still too large to fit into passageways, the dwarves have given them new magical abilities to compensate. Golems can teleport themselves between rooms, and teleport enemies to them when they are out of reach.
| HP | Armor | Accuracy | Evasion | Damage | EXP | EXP Cap | Properties |
|---|---|---|---|---|---|---|---|
| 85 | 0 - 12 | 28 | 18 | 25 - 40 | 12 | 22 | Inorganic |
It has a 12.5% chance to drop a weapon or random armor piece using the Demon Halls tier distribution. The chance is reduced by 1/2 for every piece obtained from this enemy so far, to prevent farming; a sketch of this reduction follows below.
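The anti-farming rule can be pictured with a minimal Python sketch; the halving factor mirrors the "reduced with each drop" wording used elsewhere on this page, and the exact in-game formula may differ.

```python
import random

def golem_gear_drop(pieces_already_dropped, base_chance=0.125, factor=0.5):
    """Roll the stone golem's gear drop, assuming the 12.5% base chance is
    multiplied by `factor` for every piece already obtained (a hypothetical
    reading of the anti-farming rule above)."""
    return random.random() < base_chance * factor ** pieces_already_dropped

# Effective chance for the first few potential drops.
for n in range(4):
    print(f"after {n} previous drops: {0.125 * 0.5 ** n:.4f}")
```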
## Demon Halls enemies Edit
### Succubus Edit
Succubus The succubi are demons that look like seductive (in a slightly gothic way) girls. Using its magic, the succubus can charm a hero, who will become unable to attack anything until the charm wears off. When succubi attack a charmed hero, they will steal their life essence.
| HP | Armor | Accuracy | Evasion | Damage | EXP | EXP Cap | Properties |
|---|---|---|---|---|---|---|---|
| 80 | 0 - 10 | 40 | 25 | 22 - 30 | 12 | 25 | Demonic |
It has a 33% chance to drop a random scroll when defeated. However, all scrolls are equally likely to be dropped regardless of rarity, except the scroll of upgrade.
### Evil eye Edit
Evil Eye Evil Eyes are floating balls of pent up demonic energy. While they are capable of melee combat, their true strength comes from their magic. After building energy for a short time an Evil Eye will unleash a devastating beam of energy called the deathgaze. Anything within the Evil Eye's sights will take tremendous damage, wise adventurers will run for cover.
| HP | Armor | Accuracy | Evasion | Damage | EXP | EXP Cap | Properties |
|---|---|---|---|---|---|---|---|
| 100 | 0 - 10 | 30 | 20 | Melee: 20 - 30, Magic: 30 - 50 | 13 | 25 | Demonic, flying |
It drops two dewdrops (50% chance), random runestone (25% chance) or random seed (25% chance) when defeated.
### Scorpio Edit
Scorpio These huge arachnid-like demonic creatures avoid close combat by all means, firing crippling serrated spikes from long distances.
| HP | Armor | Accuracy | Evasion | Damage | EXP | EXP Cap | Properties |
|---|---|---|---|---|---|---|---|
| 95 | 0 - 16 | 36 | 24 | 24 - 36 | 14 | 26 | Demonic |
It has a 50% chance to drop a random potion when defeated. However, all potions are equally likely to be dropped regardless of rarity, except the potion of healing and the potion of strength.
### Acidic scorpio Edit
Scorpio These huge arachnid-like demonic creatures avoid close combat by all means, firing crippling serrated spikes from long distances.
| HP | Armor | Accuracy | Evasion | Damage | EXP | EXP Cap | Properties |
|---|---|---|---|---|---|---|---|
| 95 | 0 - 16 | 36 | 24 | 24 - 36 | 14 | 26 | Demonic, acidic |
It drops a potion of experience when defeated. Note: this is currently bugged, and the acidic scorpio has similar drops to the regular one.
### Ripper demon Edit
Ripper Demon These horrific creatures are the result of the many dwarven corpses in this region being put to use by demonic forces. Rippers are emaciated ghoulish creatures that resemble dwarves, but with broken bodies and long sharp claws made of bone. Ripper demons are not particularly durable, but they are agile and dangerous. They are capable of leaping great distances to quickly reach targets before goring them with their claws.
| HP | Armor | Accuracy | Evasion | Damage | EXP | EXP Cap | Properties |
|---|---|---|---|---|---|---|---|
| 60 | 0 - 4 | 30 | 22 | 15 - 25 | 9 | -2 | Demonic, Undead |
This enemy attacks at double speed. If it's at least 3 tiles away from its target, it will leap at the target's location. This attack is telegraphed, giving the Hero 1 turn to move away. If the demon leaps onto its target, it causes bleeding for 75% of its attack damage. It doesn't spawn naturally, only from demon spawners. Ripper demons have an enemy cap per level that is separate from the one used for regular enemies.
### Demon spawner Edit
Demon Spawner This twisting amalgam of dwarven flesh is responsible for creating ripper demons from the dwarves that have died in this region. Clearly the demons see no issue with using every resource they can against their enemies. While visually terrifying, demon spawners have no means of moving or directly defending themselves. Their considerable mass makes them hard to kill quickly however, and they will spawn ripper demons more quickly when they are under threat. Demon spawners seem to be connected to some central source of demonic power. Killing them may weaken it.
| HP | Armor | Accuracy | Evasion | Damage | EXP | EXP Cap | Properties |
|---|---|---|---|---|---|---|---|
| 120 | 0 - 12 | - | 0 | - | 25 | 29 | Immovable, Miniboss, Demonic |
You can find this enemy in a unique room type, one per each depth of demon halls. It doesn't do anything besides spawning ripper demons every 60 turns in depth 21, 53.33 turns in depth 22, 46.66 turns in depth 23 and 40 turns in depth 24. Taking damage reduces demon spawning cooldown. It is however possible to prevent it from spawning more demons by occupying every tile adjacent to it. It doesn't spawn over time, only during level generation. It caps maximum damage taken at once at 20 similarly to the slimes in sewers. While the damage resistance and miniboss property prevents the Hero from killing it too quickly, a pitfall trap can kill it instantly like every other non flying creature.
It is advised to kill as many demon spawners as possible, because each of them gives the Yog a 25% chance to spawn a ripper demon instead of god's larva during the final fight. It also always drops a potion of healing when defeated.
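For reference, the quoted cooldowns (60, 53.33, 46.66 and 40 turns for depths 21 to 24) fit a simple linear ramp; the sketch below reproduces them, assuming linear interpolation is what the game actually uses, and it does not model how taking damage shortens the cooldown.

```python
def ripper_spawn_cooldown(depth):
    """Turns between ripper demon spawns for depths 21-24.

    A linear fit to the values quoted above (60, 53.33, 46.66, 40);
    the exact in-game formula is an assumption here."""
    return 60.0 - 20.0 * (depth - 21) / 3.0

for depth in range(21, 25):
    print(depth, round(ripper_spawn_cooldown(depth), 2))
```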
## Non-standard enemies Edit
Enemies that do not follow the regular spawning rules or can appear in multiple chapters (areas) of the game. Their stats usually depend on the depth they spawn on.
### Animated statue Edit
Animated Statue You would think that it's just another one of this dungeon's inanimate statues, but its red glowing eyes give it away. While the statue itself is made of stone, the weapon it's wielding looks real.
### Armored statue Edit
Animated Statue You would think that it's just another one of this dungeon's inanimate statues, but its red glowing eyes give it away. While the statue itself is made of stone, the weapon it's wielding looks real.
Animated statues have a 1/10 chance to spawn wearing random enchanted armor. These statues have the same base armor, but it's increased by the armor they are wearing, and their health is doubled. This makes them incredibly hard to kill.
### Giant piranha Edit
Giant Piranha These carnivorous fish are not natural inhabitants of underground pools. They were bred specifically to protect flooded treasure vaults.
### Wraith Edit
Wraith A wraith is a vengeful spirit of a sinner, whose grave or tomb was disturbed. Being an ethereal entity, it is very hard to hit with a regular weapon.
### Mimic Edit
Mimic Mimics are magical creatures which can take any shape they wish. In dungeons they almost always choose a shape of a treasure chest, because they know how to beckon an adventurer.
### Golden mimic Edit
Golden Mimic Mimics are magical creatures which can take any shape they wish. In dungeons they almost always choose a shape of a treasure chest, in order to lure in unsuspecting adventurers. Golden mimics are tougher mimics which try to attract the strongest adventurers. They have better loot, but are also much stronger than regular mimics.
| HP | Armor | Accuracy | Evasion | Damage | EXP | EXP Cap | Properties |
|---|---|---|---|---|---|---|---|
| (1+L)*6 | 1+L/2 | 6+L | 2+L/2 | Normal attack: 1+L to 2+2*L, Surprise attack: 2+2*L | 0 | 0 | - |
L=1.33*Depth
A more powerful version of the mimic, disguised as a golden chest. The item it drops is never cursed and has a 50% chance to have an extra upgrade.
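As a reading aid for the scaling rules above, here is a minimal sketch that evaluates them for a given depth; whether the game rounds L, and how it rolls the ranges, are assumptions.

```python
def golden_mimic_stats(depth):
    """Evaluate the golden mimic scaling quoted above (L = 1.33 * depth).

    Rounding behaviour is an assumption; the game may truncate or round L."""
    L = 1.33 * depth
    return {
        "hp": round((1 + L) * 6),
        "armor": round(1 + L / 2),
        "accuracy": round(6 + L),
        "evasion": round(2 + L / 2),
        "damage": (round(1 + L), round(2 + 2 * L)),  # normal attack range
    }

print(golden_mimic_stats(15))  # example depth
```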
### Crystal mimic Edit
Crystal Mimic Mimics are magical creatures which can take any shape they wish. In dungeons they almost always choose a shape of a treasure chest, in order to lure in unsuspecting adventurers. Crystal mimics are trickier than their regular cousins, and prefer to avoid conflict while stealing loot. They will attempt to sprint away once discovered, and have the ability to reposition enemies when they attack.
| HP | Armor | Accuracy | Evasion | Damage | EXP | EXP Cap | Properties |
|---|---|---|---|---|---|---|---|
| (1+L)*6 | 1+L/2 | 6+L | 2+L/2 | Normal attack: 1+L to 2+2*L, Surprise attack: 2+2*L | 0 | 0 | - |
L=Depth
This rare variant of mimic replaces one of the two crystal chests in the treasure room. Its surprise attack can steal a random item from the Hero's backpack (like a crazy thief, but with a much greater chance of success). After being revealed, it gains a haste buff for two turns and tries to run away with its contents. If it gets outside the player's FOV, it disappears for good. It only fights the Hero when cornered, and it can randomly teleport nearby enemies around when attacking. The item it drops is never cursed.
### Golden bee Edit
Golden Bee Despite their small size, golden bees tend to protect their home fiercely. This one is very mad, better keep your distance.
When a potion of honeyed healing is used:
This one has been placated, and seems to want to follow you.
## Enemy properties Edit
Many enemies have an extra internal value that determines some of the player's interactions with it, usually by adding extra resistance or even full immunity to certain factors or elements.
Note: Resistance in game's code means a static 50% effect reduction, whereas immunity means 100% reduction.
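In code terms the note above amounts to a simple effect modifier; a minimal sketch (not the game's actual implementation) is:

```python
def modified_effect(value, resistant=False, immune=False):
    """Apply the property rules described above: immunity cancels the
    effect entirely, resistance halves it, otherwise it is unchanged."""
    if immune:
        return 0
    if resistant:
        return value * 0.5
    return value

print(modified_effect(20, resistant=True))  # 10.0
print(modified_effect(20, immune=True))     # 0
```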
### Undead Edit
Enemies:
• Skeleton
• Crazy thief
• Crazy bandit
• Prison guard
• Dwarven ghoul
• Dwarf warlock
• Dwarf monk
• Senior monk
• Stone golem
• King of Dwarves
• Ripper demon
### Demonic Edit
• Takes 1/3 bonus damage from wand of prismatic light and ~2/3 bonus damage from holy bombs.
Enemies:
• Fetid rat
• Goo
• Ripper demon
• Succubus
• Evil eye
• Scorpio
• Acidic scorpio
• Yog-Dzewa (the Eye)
• Burning fist
• Soiled fist
• Rotting fist
• Rusted fist
• Bright fist
• Dark fist
• God's larva
• Mimic
• Golden mimic
• Crystal mimic
### Flying Edit
• Does not trigger traps, can move over chasm squares and traps (also immune to ground-based trap effects if triggered underneath), and does not trample grass or plants.
Enemies:
• Swarm of flies
• Newborn elemental
• Tengu
• Vampire bat
• Fire elemental
• Evil eye
### Inorganic Edit
Enemies:
• Animated statue
• Skeleton
• DM-100
• DM-200
• DM-201
• DM-300
• Pylon
• Stone golem
### Fiery Edit
Enemies:
• Fire elemental
• Burning fist
### Acidic Edit
Enemies:
• Goo
• Acidic scorpio
• Rotting fist
### Electric Edit
Enemies:
• DM-100
• Pylon
• Shock elemental
• Bright fist
### Large Edit
• Cannot move through one tile wide spaces.
### Miniboss Edit
• Resistant to rot darts and weakening trap
• Immune to polymorph (scroll of polymorph or cursed wand effect) and the assassin's instant kill move
• Cannot be corrupted, only doomed
Enemies:
• Fetid rat
• Gnoll trickster
• Giant crab
• Rot lasher
• Rot heart
• Newborn elemental
• Pylon
• Demon spawner
• Burning fist
• Soiled fist
• Rotting fist
• Rusted fist
• Bright fist
• Dark fist
### Boss Edit
Enemies:
• Goo
• Tengu
• DM-300
• Dwarf King
• Yog-Dzewa (the Eye)
### Immovable Edit
Enemies:
• Rot lasher
• Rot heart
• DM-201
• Pylon
• Demon spawner
• Yog-Dzewa (the Eye)
### Blob Immune Edit
• Immune to blob/environmental effects, such as gases
Enemies:
• Giant piranha
## History Edit
Version Change
v0.2.3 Changed:
• Enemies drop less potions of healing over time to make farming more difficult
• Golden bee (non-standard enemy); unlike in the original, it will defend the shattered honeypot it came from and hit anything that comes near it.
v0.3.0 Changed:
• Mimic are now less common, but stronger and give better loot
v0.3.1 Changed:
• Monks now always disarm after 4 to 8 hits instead of having a constant 1/6 chance to disarm
• Prison guard (prison enemy)
Changed:
• Gnoll shaman's ranged attack can now be used every turn
• Thieves can now escape with a stolen item
• Marsupial rats damage reduced to 1-4 from 1-5, evasion reduced to 2 from 3
• Gnoll scout accuracy reduced to 10 from 11
• Sewer crabs less likely to spawn on floor 3, exp reward increased to 4 up from 3
• Swarm of flies health reduced to 50 down from 80, accuracy reduced to 10 down from 12, moved to sewers
v0.4.1 Changed:
• Lategame enemies now deal more damage to compensate for more powerful and reliable armor
• Evil eye's deathgaze now much stronger, but takes 2 turns to cast and has cooldown
v0.4.2 Changed:
• Skeletons are no longer immune to grim
v0.4.3 Changed:
• Improved pathfinding
v0.6.1 Changed:
• New sprite for mimics
• Improved pathfinding
• New enemy properties to handle resistances and immunities
Changed:
• Enemy resistances now always provide 50% damage/duration reduction
• Animated statue no longer immune to vampiric, now inorganic
• Vampire bat no longer resistant to vampiric
• Cave spinner no longer immune to rooting
• Evil eye no longer resistant to vampiric
• Scorpio no longer resistant to poison and vampiric, acidic scorpio now ACIDIC
• Fire elemental no longer immune to wand of fireblast or healed by burning, now fiery, not demonic
• Skeleton and stone golem now inorganic
• Crazy thief and prison guard now undead, no longer demonic
• Golden bee now immune to poison and amok
v0.6.4 Changed:
• Healing potion drop chance reduction is now harsher
• Enemies killed by falling only award 50% exp
v0.6.5 Changed:
• Skeleton explosion damage increased to 6-12 from 2-10, but armor is now twice as effective against it
• Skeleton weapon drop chance reduced from 20% to 12.5%
v0.7.0 Changed:
• Succubus now heals upon hitting a charmed enemy to compensate for charm being nerfed
• Crazy thief can drop any ring/artifact instead of only master thief's armband
• Golden bee can now be turned into an ally with elixir of honey healing
v0.7.3 Changed:
• Golden bees will now protect the nearest available shattered honeypot rather than only the one they came from.
• Snake (sewer enemy)
• Slime and its rare variant cautic slime (sewer enemy)
• Necromancer (prison enemy)
Changed:
• Enemy spawns now much more consistent
• Guard exp reward increased to 7 from 6, but it no longer drops potions of healing
• Crab damage reduced to 1-7 down from 1-8
• Albino rat exp reward increased to 2 from 1 and it now drops mystery meat
• Skeletons no longer rarely spawn in the sewers
• DM-100 (replaces gnoll shaman in prison)
• DM-200 and its rare variant DM-201 (caves enemy)
• Multiple variants of gnoll shaman
• Dwarven ghoul (city enemy)
• Multiple variants of elemental
• Demon spawner and ripper demons (halls enemies)
• Armored statue, rare variant of animated statue
• Crystal mimic, rare variant of mimic
Changed:
• Prison guard accuracy reduced to 12 from 14, armor reduced to 0-7 from 0-8
• Necromancer evasion increased to 13 from 11
• Vampire bat damage reduced to 5-15 from 5-18, heals only for damage - 4 after a successful attack
• Gnoll brute damage reduced to 5-25 from 6-26; it now starts raging at 0 HP, gains shielding upon raging, raging damage reduced to 15-40 from 15-45, and it is only immune to terror while raging
• Shielded brute renamed to armored brute, evasion reduced to 15 from 20, armor increased to 6-10 from 0-10, survives raging longer than a normal brute, now drops scale or plate armor
• Gnoll shaman completely reworked, now spawns in caves only
• Cave spinner accuracy increased to 22 from 20, evasion increased to 17 from 14, now can shoot web at tiles next to target
• Fire elemental health reduced to 60 from 65, can now ignite target from distance, potion drop chance increased to 12.5% from 10%
• Dwarf monk now builds up focus to parry attacks instead of disarming, no longer immune to amok and terror
• Senior monk now builds up focus faster than normal monk instead of causing paralysis, drops pasty
• Warlock melee damage reduced to 12-18 from 16-22, magic attack applies degraded debuff instead of weakened, no longer resistant to grim
• Stone golem reworked, now large, can teleport attackers and itself
• Succubus no longer immune to sleep, scroll drop chance reduced to 33% from 50%, but can now drop any scroll except upgrade and identify
• Evil eye exp cap increased to lvl 26 from lvl 25, no longer resistant to grim, now always drops 2 dewdrops (50%), random seed (25%) or random runestone (25%) rather than just 1 dewdrop
• Scorpio exp cap increased to lvl 27 from lvl 25, potion drop chance increased to 50% from 20% and removed drop cap, can now drop any potion except strength and healing
• Acidic scorpio now applies caustic ooze to any attacker or target instead of reflecting damage to the attacker, always drops a potion of experience
• Golden bee now immune to poison and amok
• Mimic reworked, can now be distinguished from a normal chest
• Super mimic replaced by golden mimic, can now spawn upon floor generation like normal mimic
• Wraith no longer immune to grim and terror
v0.8.1 Changed:
• Slime weapon drop chance increased to 20% from 10%, but is reduced with each drop, weapon can now be upgraded
• Crazy bandit now blinds for 5 turns up from 2-5 turns
• Skeleton weapon drop chance increased to 16.67% from 12.5%, but is reduced with each drop, weapon can now be upgraded
• DM-100 scroll drop chance reduced to 25% from 33%
• Prison guard armor drop chance increased to 20% from 16.67%, but is reduced with each drop, armor can now be upgraded
• DM-200 now has a 12.5% chance (decreasing with each drop) to drop a weapon or random armor piece using Dwarf City tier distribution
• Dwarven ghoul now has a 20% chance to drop a pile of gold
• Dwarf warlock potion drop chance reduced to 50% from 83%
• Stone golem now has a 12.5% chance (decreasing with each drop) to drop a weapon or random armor piece using Demon Halls tier distribution
• Succubus now charms for 5 turns up from 3-4 turns
v0.8.2 Changed:
• Evil eye no longer immune to terror
Community content is available under CC-BY-SA unless otherwise noted.
|
### Special Beam Physics Seminar
Tuesday, August 15, 2006, 2:30 PM
ARC, Room 231/233
First Demonstration of High Gain Lasing and Polarization Switch
with a Distributed Optical
Klystron FEL at Duke University
Dr. Ying Wu
Duke University
The FEL gain can be significantly increased using a distributed optical klystron (DOK) FEL with multiple wigglers and bunchers. The enhanced FEL gain of DOK FELs opens the door for storage ring based FEL oscillators to operate in the VUV region toward 150 nm and beyond. This presentation reports the first experimental results from the world's first distributed optical klystron FEL, the DOK-1 FEL, at Duke University. The DOK-1 FEL is a hybrid system comprised of four wigglers: two horizontal and two helical. With the DOK-1 FEL, we have obtained the highest FEL gain among all storage ring based FELs, 47.8% (±2.7%) per pass at 450 nm. We have also realized controlled polarization switches of the FEL beam by non-optical means through the manipulation of a buncher magnet. DOK FELs are promising light sources capable of rapid polarization switching in the UV and VUV. Furthermore, DOK FELs can be used as a multi-color light source with multiple lasing lines and harmonic generation.
Talk Slides: (Slides)
|
# User jzadeh - MathOverflow, most recent activity (feed retrieved 2013-05-23)

## Question: Does the existence of an asymptotic density imply the existence of a measure on infinite dimensional (path) space? (2012-01-31)

This question is related to http://mathoverflow.net/questions/59513/question-about-a-limit-of-gaussian-integrals-and-how-it-relates-to-path-integrati

A couple of authors have observed that composing a random walk an infinite number of times gives an asymptotic time-invariant density. The original reference is "Fractional diffusion equations and processes with randomly varying time", Enzo Orsingher and Luisa Beghin, http://arxiv.org/abs/1102.4729. Roughly speaking, I am curious whether this notion of iterating a random walk infinitely often, together with the fact that this iteration converges to some fixed density, implies the existence of an infinite dimensional measure.

Line (3.14) of Orsingher and Beghin's paper reads, for $t > 0$ and $x \in \mathbb{R}$,
$$(*) \qquad \lim_{n \rightarrow \infty} 2^{n} \int_{0}^{\infty} \cdots \int_{0}^{\infty} \frac{e^{-x^2/(2z_1)}}{\sqrt{2 \pi z_1}} \frac{e^{-z_1^2/(2z_2)}}{\sqrt{2 \pi z_2}} \cdots \frac{e^{-z_n^2/(2t)}}{\sqrt{2 \pi t}} \, \mathrm{d}z_1 \cdots \mathrm{d}z_n = e^{-2 |x|}.$$

Since (*) is very similar to normalizations carried out in computing the propagator in quantum mechanics, or to formulations of path integrals in general, I was curious how rigorous we could make the following statements. The constructions I have seen are carried out either via the standard definition of Wiener measure on finite dimensional "cylinder sets" or via an application of the Bochner-Minlos theorem combined with a normalization of Gaussian measure on $\mathbb{R}^n$, so I am wondering if this is something contained within the construction of Wiener measure or of other infinite dimensional measures on Banach spaces.

1) Does (*) imply the existence of a measure on the space of continuous functions with finite support (paths)?

2) If such a measure does exist, is it equivalent to Wiener measure?

## Question: Question about a limit of Gaussian integrals and how it relates to path integration (if at all)? (2011-03-25)

I have come across a limit of Gaussian integrals in the literature and am wondering if this is a well known result.

The background for this problem comes from the composition of Brownian motion and studying the densities of the composed process. If we have a two-sided Brownian motion $B_1(t)$, we replace $t$ by an independent Brownian motion $B_2(t)$ and study the density of $B_1(B_2(t))$. If we iterate this composition $n$ times we get the iterated integral in (**) below as an expression for the density of the $n$ times iterated Brownian motion. The result I am interested in is derived in the same paper by Orsingher and Beghin, http://arxiv.org/abs/1102.4729, whose line (3.14) reads, for $t > 0$,
$$(**) \qquad \lim_{n \rightarrow \infty} 2^{n} \int_{0}^{\infty} \cdots \int_{0}^{\infty} \frac{e^{-x^2/(2z_1)}}{\sqrt{2 \pi z_1}} \frac{e^{-z_1^2/(2z_2)}}{\sqrt{2 \pi z_2}} \cdots \frac{e^{-z_n^2/(2t)}}{\sqrt{2 \pi t}} \, \mathrm{d}z_1 \cdots \mathrm{d}z_n = e^{-2 |x|}.$$

1. How do you prove this result without using probability? Edit: a solution using a saddlepoint approximation has been posted at http://physics.stackexchange.com/q/7552/2757, but I am still not clear on how to make the argument rigorous.

2. I have been studying a slight generalization of (**) from the probability side of things and have been trying to use dominated convergence to show the LHS of (**) is finite, but I am having problems finding a dominating function over the interval $[1,\infty)^n$. Is dominated convergence the best way to show the LHS of (**) is finite?

3. Is this a type of path integral (functional integral)? Or is the integrand some kind of kinetic-plus-potential term arising in quantum mechanics? Do expressions like (**) ever come up in the physics literature?

(I tried using the change of variable theorem for Wiener measure to transform (**) into a Wiener integral with respect to a specific integrand, and have had some success with this; I think it shows how to compute a Wiener integral of a function depending on a path and not just on a finite number of variables, but I did not see how to take this any further. The change of variable theorem for Wiener measure was taken from "The Feynman Integral and Feynman's Operational Calculus" by G. W. Johnson and M. L. Lapidus.)

## Answer to "What structure is needed to define a Gaussian distribution on a given space?" (2011-03-27)

One general construction can be found, for instance, in Revuz and Yor, "Continuous Martingales and Brownian Motion", Proposition (1.3): let $H$ be a separable real Hilbert space. There exist a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a family $X(h)$, $h \in H$, of random variables on the space such that

i) the map $h \rightarrow X(h)$ is linear;

ii) for each $h$ the random variable $X(h)$ is Gaussian and centered, with $\mathbb{E}[X(h)^2] = \|h\|_{H}^{2}$.

Nualart's book "Malliavin Calculus" also starts with the notion of an isonormal Gaussian process, which is general as well, as do Adler's books on Gaussian processes. Alternatively, you could look at one of T. Hida's books on white noise analysis for a construction based on the Bochner-Minlos theorem and nuclear spaces. Sorry, none of these have a geometric perspective that I am aware of.

## Answer to "Time integrals of diffusion processes" (2011-01-19)

My stochastic calculus professor always used to say "When in doubt, use Itô." So let $f(t,x) = tx$ and compute $\partial_t f(t,x) = x$, $\partial_x f(t,x) = t$ and $\partial_{xx} f(t,x) = 0$.

Itô's lemma says that for $f$ twice differentiable with respect to $x$ and once differentiable with respect to $t$, the following formula holds for any Itô process:
$$f(t, X_t) = f(0,X_0) + \int_{0}^{t}\partial_s f(s,X_s)\, ds + \int_{0}^{t}\partial_x f(s,X_s)\, dX_s + \tfrac{1}{2}\int_{0}^{t}\partial_{xx} f(s,X_s)\, d\langle X,X \rangle_s.$$

Applying this to $f(t,x) = tx$ gives
$$t X_t = 0 + \int_{0}^{t}X_s\, ds + \int_{0}^{t} s\, dX_s + 0,$$
or, to give you a starting answer, $\int_{0}^{t}X_s\, ds = t X_t - \int_{0}^{t} s\, dX_s$.

Edit 3: the following only holds if $X_t$ is a Gaussian process, which is not true in general. In vague words, the (Riemann) integral of such a process is equal to the difference of two Gaussian processes, which should again be Gaussian (and if this logic is correct, it should then suffice to characterize the process's covariance structure in order to have a complete understanding of its law).

For example, one can compute the variance when $X_t$ is standard Brownian motion:
$$\mathbb{E}\Big[\Big(\int_{0}^{t}X_s\, ds\Big)^2\Big] = t^2\, \mathbb{E}[X_t^2] - 2t\, \mathbb{E}\Big[X_t \int_{0}^{t} s\, dX_s\Big] + \mathbb{E}\Big[\Big(\int_{0}^{t}s\, dX_s\Big)^2\Big].$$
By the Itô isometry, $\mathbb{E}[(\int_{0}^{t}s\, dX_s)^2] = \int_{0}^{t}s^2\, ds = t^{3}/3$. To compute $\mathbb{E}[X_t \int_{0}^{t} s\, dX_s]$, notice first that the Itô integral of a deterministic function is always a Gaussian process; Shavi has given that $\mathbb{E}[X_t \int_{0}^{t} s\, dX_s] = t^2/2$. Hence
$$\mathbb{E}\Big[\Big(\int_{0}^{t}X_s\, ds\Big)^2\Big] = t^3 - t^3 + \frac{t^{3}}{3} = \frac{t^3}{3}.$$

Computing the covariance $\mathbb{E}[\int_{0}^{t}X_s\, ds \int_{0}^{u}X_s\, ds]$ involves the terms $\mathbb{E}[\int_{0}^{t}s\, dX_s \int_{0}^{u}s\, dX_s]$ and $\mathbb{E}[X_t X_u]$, which are again well known in certain cases (the second is obviously equal to $\min(t,u)$ when $X$ is Brownian motion) but may be difficult to handle in your general case.

Edit 2: to give an approach to the question "Is it possible that the integrated processes have equivalent laws?": since $\int_{0}^{t}X_{s}^{(1)}ds$ and $\int_{0}^{t}X_{s}^{(2)}ds$ are Gaussian processes (we proved this using Itô), it suffices to check whether their covariance functions $g_{1}(t,u)=\mathbb{E}[\int_{0}^{t}X_{s}^{(1)}ds \int_{0}^{u}X_{s}^{(1)}ds]$ and $g_{2}(t,u) = \mathbb{E}[\int_{0}^{t}X_{s}^{(2)}ds \int_{0}^{u}X_{s}^{(2)}ds]$ are equal for all $t,u > 0$ to show that the two processes have equivalent laws.

Applying the result from the Itô calculation above,
$$\mathbb{E}\Big[\int_{0}^{t}X_s\, ds \int_{0}^{u}X_s\, ds\Big] = \mathbb{E}\Big[\Big(t X_t - \int_{0}^{t} s\, dX_s\Big)\Big(u X_u - \int_{0}^{u} s\, dX_s\Big)\Big] = tu\, \mathbb{E}[X_t X_u] - t\, \mathbb{E}\Big[X_t \int_{0}^{u} s\, dX_s\Big] - u\, \mathbb{E}\Big[X_u \int_{0}^{t} s\, dX_s\Big] + \mathbb{E}\Big[\int_{0}^{t}s\, dX_s \int_{0}^{u}s\, dX_s\Big].$$
I refer to the example above for ways to deal with the terms in this expression given certain assumptions on $\mu$ and $\sigma$. Edit 3: again, this is just a way to start, and the calculations involving standard Brownian motion are trivial, but the point is that the laws of $Y^{(1)}$ and $Y^{(2)}$ are equivalent (as opposed to equal) as soon as you show $g_1(t,u) = g_2(t,u)$ for all $t,u>0$.

## Answer to "Proofs that require fundamentally new ways of thinking" (2011-01-18)

Malliavin's proof of Hörmander's theorem is very interesting in the sense that one of the basic ingredients in the language of the proof is a derivative operator with respect to a Gaussian process acting on a Hilbert space. The adjoint of the derivative operator is known as the divergence operator, and with these two definitions one can establish the so-called "Malliavin calculus", which has been used to recover classical probabilistic results as well as to give new insight into current research in stochastic processes, such as developing a stochastic calculus with respect to fractional Brownian motion. What makes his proof more interesting is that Malliavin was trained in geometry and used the language of probability only in a somewhat marginal sense at times; a lot of his ideas are very geometric in nature, which can be seen for example in his very dense book: P. Malliavin, Stochastic Analysis, Grundlehren der Mathematischen Wissenschaften 313, Springer-Verlag, Berlin, 1997.

## Question: Generalizations of a product formula for the gamma function (2010-12-25)

Hello and happy holidays. I am interested in generalizations of the following product formula for the gamma function $\Gamma(z)= \int_{0}^{\infty} t^{z-1}e^{-t}dt$ when $n \geq 2$:
$$\prod_{k = 1}^{n} \frac{\Gamma(\frac{z}{2^k}+\frac{1}{2})}{\Gamma(\frac{1}{2})} = \frac{\Gamma(z+1)}{2^{2z(1-\frac{1}{2^n})} \Gamma(\frac{z}{2^n}+\frac{1}{2})}.$$

Let $H_1,H_2,\dots,H_n \in (0,1)$ and $z \in \mathbb{R}^+$.

1) Is it true that the following formula holds for $n \geq 2$?
$$\frac{\Gamma(zH_1 + \frac{1}{2})\Gamma(zH_1H_2 + \frac{1}{2}) \dotsb \Gamma(zH_1H_2 \dotsb H_n + \frac{1}{2})}{\prod_{k=1}^{n} \Gamma(\frac{1}{2})} = \frac{\Gamma(z+1)}{2^{2z(1-H_1H_2 \dotsb H_n)} \Gamma( z H_1 H_2 \dotsb H_n + \frac{1}{2})}$$

2) As $n$ tends to $\infty$, is the LHS of the last expression finite?

3) Does question 1) hold if $H_1 = 1$?

(In the context of my research the $H_i$'s are Hurst parameters from $n+1$ independent fractional Brownian motions.)

## Answer to "A simple decomposition for fractional Brownian motion with parameter $H<1/2$" (2010-11-07)

Sorry, I don't have time to write a better answer. I would be willing to bet Nualart has thought about this problem at least, and his answer could very well be encompassed in this paper (in particular, your problem might be a special case described in section 3): P. Lei and D. Nualart, "A decomposition of the bifractional Brownian motion and some applications", Statistics and Probability Letters 79, 619-624, 2009; http://arxiv.org/PS_cache/arxiv/pdf/0803/0803.2227v1.pdf

## Comments by jzadeh

- (2012-02-03, on the asymptotic density question) @AlexanderChervov: Thanks for your ideas, but I am still left with the feeling that the RHS of (*) can be used to come up with a measure that concentrates on something different from Hölder continuous paths with modulus 1/2. Furthermore, (*) is an expression for the probability density of iterated Brownian motion; the density is not Gaussian and its transition probabilities do not satisfy Kolmogorov-Chapman, so one is led to believe the induced measure is not a so-called "Gaussian measure". I wonder if (*) can give some way to study the induced measure of the IBM process itself.
- (2012-01-31) @AlexanderChervov: Thanks for your comment. I see your point, so to make things a little more clear: using equation (*), can we construct a measure on the space of continuous functions? Equation (*) has generalizations given by considering iterated fractional Brownian motion, so I am curious to see what type of measures on function spaces (if any) are induced by iterating certain classes of random walks an infinite number of times.
- (2011-04-05, on the Gaussian integral question) Is it clear that $T$ has a fixed point because of your comments that $K$ is Hilbert-Schmidt?
- (2011-04-05) Thanks for your help. I am not sure I completely understand the iteration process in terms of $T$. Don't you want to show $\lim_{n\rightarrow \infty } T^{n} \phi = \phi$?
- (2011-03-27, on the Gaussian distribution answer) Sorry, is it appropriate to remove this as an answer then? One last thing I can add is the construction of "Brownian motion" in the free probability setting, where you are working over von Neumann algebras of operators, as in section 1.1 of http://www.iecn.u-nancy.fr/~nourdin/4th-moment-Wigner-KNPS.pdf, just in case you were unaware of an example of a type of Brownian motion taking values in a non-commutative space.
- (2011-03-26) Can a moderator remove this if I flag it?
- (2011-03-25) Concerning 2) and the comment following 3): I have been moving back and forth between analyzing the iterated density of $X_n$ and analyzing the behavior of its moment generating function, and these comments really apply to the mgf.
- (2011-03-25) The density of $X_n(t)$ is given by the iterated integral in (**).
- (2011-03-25) Yes, this is true; from a probabilistic perspective you can argue by self-similarity. Set $X_n(t)=B_n(B_{n-1}(\dots(B_1(t))\dots))$ where each $B_i$ is a two-sided Brownian motion. Then the following equality holds in distribution: $X_n(t)=t^{\frac{1}{2^n}} X_n(1)$. Taking limits on both sides, we see that the random variable $\lim_{n\rightarrow \infty} X_n(t)$ depends only on $X_n(1)$ (i.e. is time invariant). Other authors have made this more rigorous (there is a proof that the asymptotic density is time invariant based on the method of moments).
- (2011-01-20, on the time integrals answer) And here is my mistake: I was trying to figure out why $Y$ would be Gaussian in general, but it is not. My argument breaks down; I should have said that $tX_t - \int_{0}^{t}X_s\,ds = \int_{0}^{t}s\,dX_s$ is a Gaussian process.
- (2011-01-20) Since the processes are Gaussian, they will have equivalent laws (as opposed to equal) for all time if the covariance functions are equal; that is, if $g_1(t,u) = g_2(t,u)$ for all $t,u >0$, the laws of the processes will be equivalent. Do you disagree with this fact, Didier?
- (2011-01-20) Thanks for the downvote, The Bridge. I refer you to my passage above: "it should then suffice to characterize the processes covariance structure in order to have a complete understanding of the law of the processes". Since that is obviously too vague, I have elaborated a little more in Edit 2, and I refer you to R. J. Adler (1990), An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes, IMS Lecture Notes-Monograph Series, Vol. 12, for the theorems I am citing on Gaussian processes.
- (2011-01-19) Thank you for the help, I will make the appropriate edits.
- (2010-12-26, on the gamma function question) Yes, in the context of exact covering systems; here is the reference: John Beebee, "Exact Covering Systems and the Gauss-Legendre Multiplication Formula for the Gamma Function", Proceedings of the American Mathematical Society, Vol. 120, No. 4 (Apr. 1994), pp. 1061-1065; http://jbeebee.net/math%20web%20pages/gauss_legendre.pdf
- (2010-12-26) Thank you very much for your time and nice one-line answer. We have also been considering the case $H_1 = 1$. For fixed $z \in \mathbb{R}$ and $H_1 = 1$, is there a way to multiply by an appropriate constant to still make the equality hold?
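As a numerical aside (not part of the original posts), the limit in (*) / (**) can be sanity-checked by Monte Carlo: each inner integral corresponds to a half-normal layer whose variance is the previous draw, and the outer variable is an ordinary Gaussian. The sketch below compares the empirical density of the composed variable with the Laplace density $e^{-2|x|}$; the number of layers and the sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def iterated_bm_samples(n_layers, t=1.0, n_samples=200_000):
    """Draw samples whose density is the n-fold integral in (**):
    n half-normal layers (each variance is the previous draw), then one
    final two-sided Gaussian for the outer variable x."""
    z = np.full(n_samples, t)
    for _ in range(n_layers):
        z = np.abs(rng.normal(0.0, np.sqrt(z)))
    return rng.normal(0.0, np.sqrt(z))

x = iterated_bm_samples(n_layers=12)
for point in (0.0, 0.5, 1.0):
    h = 0.05  # half-width of the window used to estimate the density
    empirical = np.mean(np.abs(x - point) < h) / (2 * h)
    print(f"x = {point}: empirical {empirical:.3f} vs e^(-2|x|) = {np.exp(-2 * abs(point)):.3f}")
```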
|
# 83g of ethylene glycol dissolved
Question:
$83~\mathrm{g}$ of ethylene glycol is dissolved in $625~\mathrm{g}$ of water. The freezing point of the solution is ______ K. (Nearest integer)
[Use : Molal Freezing point depression constant of water $=1.86 \mathrm{~K} \mathrm{~kg} \mathrm{~mol}^{-1}$ ]
Freezing Point of water $=273 \mathrm{~K}$
Atomic masses : $\mathrm{C}: 12.0 \mathrm{u}, \mathrm{O}: 16.0 \mathrm{u}, \mathrm{H}: 1.0 \mathrm{u}]$
Solution:
$\mathrm{K}_{\mathrm{f}}=1.86~\mathrm{K~kg~mol^{-1}}$
$\mathrm{T}_{\mathrm{f}}^{\circ}=273~\mathrm{K}$
Solvent: $\mathrm{H}_{2}\mathrm{O}$ ($625~\mathrm{g}$); molar mass of ethylene glycol $\left(\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2}\right)=62~\mathrm{g~mol^{-1}}$
$\Rightarrow \Delta \mathrm{T}_{\mathrm{f}}=\mathrm{K}_{\mathrm{f}} \times \mathrm{m}$
$\Rightarrow\left(\mathrm{T}_{\mathrm{f}}^{\circ}-\mathrm{T}_{\mathrm{f}}\right)=1.86 \times \frac{83 / 62}{625 / 1000}$
$\Rightarrow 273-\mathrm{T}_{\mathrm{f}}=\frac{1.86 \times 83 \times 1000}{62 \times 625}=\frac{154380}{38750} \approx 4$
$\Rightarrow \mathrm{T}_{\mathrm{f}}=273-4=269~\mathrm{K}$
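The same arithmetic can be checked with a few lines of Python (a sketch of the calculation above, not part of the original solution):

```python
# Freezing-point depression: 83 g ethylene glycol (C2H6O2) in 625 g water.
mass_solute_g = 83.0
molar_mass = 2 * 12.0 + 6 * 1.0 + 2 * 16.0     # 62 g/mol
mass_solvent_kg = 625.0 / 1000.0
Kf = 1.86                                      # K kg / mol
Tf_water = 273.0                               # K

molality = (mass_solute_g / molar_mass) / mass_solvent_kg   # ~2.14 mol/kg
delta_Tf = Kf * molality                                    # ~3.98 K
Tf_solution = Tf_water - delta_Tf                           # ~269 K

print(round(delta_Tf, 2), round(Tf_solution))
```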
|
# Physics of an infrared thermometer
The thing about infrared thermometers that bugs me is how you can get the same temperature reading regardless of the distance to the object. Shouldn't there be a difference when measuring from two different standing points, since energy flux density decreases with $${1\over distance^2}$$? Infrared thermometers work by focusing IR light on a thermopile, so measuring from further away would mean less absorbed energy, and therefore a lower inferred temperature and finally a lower voltage across the thermopile. Is there something I am getting wrong about this, or do IR thermometers make use of some other physics law, like Wien's displacement law, by somehow measuring $$\lambda_{peak}$$ to determine the temperature?
I believe the basic answer is that, within limits, as you move away from an extended source, the IR sensor collects flux from a greater amount of the surface: its fixed field of view covers a spot whose area grows as $$distance^2$$, which cancels the $$\frac{1}{distance^2}$$ falloff from each surface element. Your $$\frac{1}{distance^2}$$ formula only holds for a point source that does not fill the field of view.
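A small numerical sketch (not part of the original answer) makes the cancellation explicit. It assumes the surface fills a fixed field of view and has uniform radiance; all numerical values are arbitrary placeholders.

```python
import math

def received_power(distance_m, radiance=100.0, fov_half_angle_deg=2.0,
                   aperture_area_m2=1e-4):
    """Power collected from a uniform extended surface that fills the
    sensor's field of view (radiance in W per m^2 per steradian)."""
    half_angle = math.radians(fov_half_angle_deg)
    spot_area = math.pi * (distance_m * math.tan(half_angle)) ** 2   # grows ~ d^2
    aperture_solid_angle = aperture_area_m2 / distance_m ** 2        # falls ~ 1/d^2
    return radiance * spot_area * aperture_solid_angle

for d in (0.5, 1.0, 2.0, 4.0):
    print(d, received_power(d))   # same value at every distance
```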
|
## Distribution of primes
If we continue to enumerate the prime positive integers, we find that although they occur less and less frequently there still seems to be an unending supply of them. Later, we shall prove that there are in fact infinitely many prime numbers. This theorem first appeared in Euclid's Elements. After establishing that fact, there still remains the general question of how the primes are distributed among the integers. There can be long stretches without primes, and also sudden bunches of primes. This gives their distribution an appearance of randomness. After much calculation, Gauss (at age fifteen) conjectured the approximate size of the function $\pi(X)$ defined to be the number of primes less than or equal to $X$. First established almost a century later in the 1890's, Gauss' conjecture can be stated as the following limit:
$$\lim_{X \to \infty} \frac{\pi(X)}{X/\ln X} = 1.$$
This says that for very large values of $X$ the number $\pi(X)$ is very close to $X/\ln X$. Here, $\ln$ is the natural logarithm (i.e. to the base $e$). This limit is known today as the "prime number theorem."
Actually, Gauss discovered a function much closer to the value of the prime number counting function $\pi(X)$. This is the logarithmic integral
$$\mathrm{Li}(X) = \int_{2}^{X} \frac{dt}{\ln t},$$
where $\ln$ means the natural logarithm. Riemann conjectured that the difference $\pi(X) - \mathrm{Li}(X)$ is just a shade above $\sqrt{X}$. This is an equivalent form of Riemann's Hypothesis:
$$|\pi(X) - \mathrm{Li}(X)| \le C_{\epsilon}\, X^{\frac{1}{2}+\epsilon}$$
for any $\epsilon > 0$. This is largely agreed to be the most important unsolved problem in number theory.
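A short computation illustrates both approximations; the sieve and the crude numerical integral below are just illustrative, not part of the original notes.

```python
import math

def prime_pi(X):
    """Count the primes <= X with a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (X + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(X ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, X + 1, p)))
    return sum(sieve)

def Li(X, steps=100_000):
    """Crude midpoint-rule value of the logarithmic integral from 2 to X."""
    h = (X - 2) / steps
    return sum(h / math.log(2 + (k + 0.5) * h) for k in range(steps))

for X in (10**4, 10**5, 10**6):
    print(X, prime_pi(X), round(X / math.log(X)), round(Li(X)))
```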
David J. Wright
2000-08-24
|
# On the consistency of different well-polished astronomy software
I have purchased a custom wedding band from a seller who claims the ring will show the constellations visible at the horizon from a specific location on a certain date and time.
However, I have fired up Stellarium and set it up to look at a specific example shown in the product page - and I cannot find any set of parameters which looks like it: Jupiter's not in the right place, some constellations are under the horizon (Virgo, Libra, Leo), some others very high in the sky (Andromeda, Perseus)… Although it's the first time I use Stellarium, I don't see how I could make a mistake:
• Location set to London Bridge (even the preset London to be sure)
• Date and time set just like in the example (double-checked 5 times)
• Cylindrical projection (tried all of them though), offset -30%
• FoV 210° (to have the 360° mapped to the entire screen - I have
absolutely no idea why 180° does not work)
• Elevation lines horizontal (although I tried moving around to align everything like in the pictures for every projection method - in vain)
Here is the example:
Am I missing something, or is their sky map wrong? If yes, is it possible that two pieces of well-polished software like Stellarium and this one (looks just as good as Stellarium) have different star locations? How come (isn't there a standard star almanac)?
If you set the date to 2018-04-06, Stellarium shows the Moon and planets in positions matching the example image. Any good planetarium software should produce a similar result.
Most likely the vendor cut and pasted two screenshots (note the seam) for April 2018, overlaid "January 1973," and hoped customers would not check. Perhaps you could ask them to send you an image for approval before printing the ring.
Stellarium is known to be accurate. Star and planet positions in 1973 are very well known, and correctly shown in Stellarium.
At that time and place, Cetus and Taurus are rising in the East and North East, where Saturn is just about to rise. Lynx is on the horizon in the North. Boötes is about to set in the West. The sun is low on the horizon in Sagittarius in the SW. Microscopium and Sculptor are in the S and SE respectively. Jupiter is very close to the sun in Sagittarius. The Moon is close to new, also in Sagittarius. Venus is also nearby in Ophiuchus. Of course the sun is up, and so the stars would not have been visible at this time.
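If you want an independent cross-check besides Stellarium, a general-purpose ephemeris library gives the same answer. Here is a sketch using astropy (assumed installed), with approximate London Bridge coordinates and placeholder times, since the exact time on the ring listing is not given here.

```python
from astropy.coordinates import AltAz, EarthLocation, get_body
from astropy.time import Time
import astropy.units as u

# Approximate coordinates for London Bridge (assumed values).
london = EarthLocation(lat=51.5079 * u.deg, lon=-0.0877 * u.deg, height=10 * u.m)

def planet_altaz(body, iso_time):
    """Altitude/azimuth of a solar-system body as seen from London."""
    t = Time(iso_time)
    altaz_frame = AltAz(obstime=t, location=london)
    return get_body(body, t, london).transform_to(altaz_frame)

for when in ("1973-01-01 20:00:00", "2018-04-06 20:00:00"):  # placeholder times
    pos = planet_altaz("jupiter", when)
    print(when, f"alt = {pos.alt.deg:.1f} deg, az = {pos.az.deg:.1f} deg")
```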
Astrobites is a daily astrophysical literature journal written by graduate students in astronomy since 2010. Our goal is to present one interesting paper per day in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.
### Why read Astrobites?
Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.
Our goal is to solve this problem, one paper at a time. In 5 minutes a day reading Astrobites, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in a new area of astronomy.
If you’re new to Astrobites and aren’t sure where to start reading, check out a few selected posts from our first four years.
### What is astro-ph?
astro-ph is the astrophysics section of arXiv.org, where researchers post their latest work (often before official review and publication). We always link back to the original arXiv post, where you can download the original article (for free). Occasionally, we will take on special topics such as results presented at conferences, tips for applying to graduate school, or tutorials for specific research tools.
### Who Writes Astrobites?
Astrobites is written by a team of graduate students at universities around the world. We bring a diverse set of research interests and backgrounds to our writing. Please visit the Meet the Authors to learn more about each author.
### Astrobites Committees
In addition to writing daily posts, our authors are encouraged to serve on Astrobites committees. Those committees include:
Administrative Committee: Collectively responsible for ensuring that Astrobites committees, working groups and chairs are upholding their designated responsibilities; act as intermediaries and advisors to members of the organization; promote expansion of the collaboration's work by e.g. promoting its use in classrooms, expanding its readership, or starting up extensions of the site for different media or target audiences. Committee members: Jenny Calahan, Mia de los Reyes, Gourav Khullar, Amber Hornsby, Haley Wahl, Michael Hammer, Kate Storey-Fisher
Scheduling Committee: Responsible for ensuring that postings are consistent/timely. Chair will organize the schedule, but committee will share responsibility for ‘emergency’ postings that arise. Committee members: Jenny Calahan, Haley Wahl, Mitchell Cavanagh
Moderating Committee: In charge of responding to comments on various social media platforms to increase reader engagement. Committee members: Huei Sears, Ellis Avallone, Joanna Ramasawmy, Sam Factor, Vatsal Panwar
Policy Committee: The Policy Committee works with the AAS Bahcall Public Policy Fellow (currently Ashlee Wilkins) to schedule posts about science policy and how it intersects with astronomy. Committee members: Kaitlyn Shin, Lukas Zalesky, Mike Foley, Tarini Konchandy, Ali Crisp, Huei Sears
Education Committee: This committee focuses on building Astrobites as a pedagogical tool across the astronomical community. The work here emphasizes on educational activities during AAS meetings, and education research work to map the efficacy of Astrobites in a typical classroom. Committee members: Ali Crisp, Mike Foley, Jason Hinkle, Jamie Sullivan, Briley Lewis, Will Saunders, Nora Shipp, Michael Hammer, Vatsal Panwar
Editorial Committee: Ensuring editorial consistency by maintaining the Astrobites style guide, operating periodic editing workshops, and helping the scheduler assign editors. Committee members: Jamie Sullivan, Emma Foxell, Tarini Konchandy, Haley Wahl, Ellis Avallone, Jason Hinkle, Ishan Mishra Contact: [email protected]
Hiring/Application Committee: Responsible for the annual application process. Specific responsibilities include advertising the call for applications, responding to inquiries, guiding the committee through reading and ranking the applications, and helping new authors get set up. The time responsibilities for leading this committee fall mostly within a 3 month period surrounding the application deadline, which is typically in September or October. Committee chairs: Haley Wahl, Tomer Yavetz Contact: [email protected]
Diversity, Equity and Inclusion Committee: The Diversity, Equity, and Inclusion committee aims to address issues of representation, marginalization, and community through education (acknowledging minoritized population issues in physics and astronomy, outreach to budding astronomical communities), advocacy (supporting diversity and inclusion initiatives in academic spaces) and efforts to make the Astrobites collaboration inclusive. Committee members: Kate Storey-Fisher, Joanna Ramasawmy, Nora Shipp, Ellis Avallone, Kaitlyn Shin, Jamie Sullivan, Mia de los Reyes, Huei Sears, Luna Zagorac
Public Relations / Advertising Committee: Dedicated to advertising Astrobites to new audiences. Committee: Haley Wahl, Michael Hammer, Wei Yan, Ashley Piccone, Lukas Zalesky
AAS Chair (in charge of organizing our AAS/EWASS conference materials and presentations): Gourav Khullar
SciBites Chair (Liaison for SciBites network): Briley Lewis
Social Media Chairs (in charge of maintaining our Facebook and Twitter presence, and growing readership of the site through these tools): Michael Hammer, Huei Sears
Website Chair (in charge of maintaining our website): Sam Factor
Slack Chair (explores Slack as a possible means of communication): Ali Crisp
Astrotweeps Chair (runs the Astrotweeps Twitter account): Haley Wahl
Undergraduate Czars (Reach out to undergraduate readers and contributors): Jason Hinkle, Will Saunders
### Statement of Inclusivity
Scientists are members of a broad human community, and may thus experience societal prejudices that directly affect their ability to contribute to the scientific endeavor. We at Astrobites support and encourage universal participation in science, regardless of minoritized status. We affirmatively declare our support for a scientific community open to — and providing support and safety for — every individual, regardless of race, ethnicity, nationality, sex, sexual orientation, gender identity, gender expression, or medical condition. Such support includes ensuring that universities, laboratories, and professional societies do not tolerate any form of harassment, and have transparent procedures for addressing such harassment when it occurs. We reject racism, sexism, homophobia, transphobia, ableism , and prejudice stemming from religion or citizenship. Eliminating these injustices is the only way to ensure that all people can benefit from participation in the science, and we accept this task as integral to the pursuit of science and scientific outreach.
Follow us on twitter @astrobites, like astrobites on Facebook, or send us an email on [email protected]!
As an independent graduate student organization, since 2016 Astrobites has been hosted and supported by the American Astronomical Society.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any institutions the author(s) may be associated with. Astrobites is not affiliated with the arXiv.
## Abbreviations¶
Place abbreviations such as i.e. and e.g. within parentheses, where they are followed by a comma. Alternatively, consider using “that is” and “for example” instead, preceded by an em dash or semicolon and followed by a comma, or contained within em dashes.
### Examples¶
The only way to modify the data in a frame is by using the data attribute directly and not the aliases for components on the frame (i.e., the following will not work).
There are no plans to support more complex evolution (e.g., non-inertial frames or more complex evolution), as that is out of scope for the astropy core.
Once you have a coordinate object you can access the components of that coordinate — for example, RA or Dec — to get string representations of the full coordinate.
For general use and scientific terms, use abbreviations only when the abbreviated term is well-known and widely used within the astronomy community. For less common scientific terms, or terms specific to a given field, write out the term or link to a resource of explanation. A good rule of thumb to follow when deciding whether or not something should be abbreviated is: when in doubt, write it out.
### Examples¶
1D, 2D, etc. is preferred over one-dimensional, two-dimensional, etc.
Units such as SI and CGS can be abbreviated as is more commonly seen in the scientific community.
White dwarf should be written out fully instead of abbreviated as WD.
Names of organizations or other proper nouns that employ acronyms should be written as their known acronym, but with a hyperlink to a website or resource for reference, for instance, CODATA.
## Purchase regrets?
Neaptide, I'll encourage you, not toward a particular mount, but to put your money in a mount of quality that you can grow with. It really is not possible in this hobby to over-mount, but very easy to under-mount your OTA. The mount is every bit as critical to happy observing as the OTA. Given your impressions of the mount, I believe you made a wise decision for you.
### #78 tchandler
Not really - although a SW 12" collapsible DOB came close. Over the past several years I have been able to observe with many fine telescopes and witness firsthand some truly exceptional phenomena that are not even within the realm of perception of most other people. How could I regret that?
### #79 infamousnation
20mm GSO Plossl: it slipped by QC and had bad, bad astigmatism too close to the center of the view.
Celestron zoom: I'm not even sure how this is a thing.
But if I could go back in time, I guess I would buy a 4-inch Tak in high school.
Edited by infamousnation, 08 June 2017 - 01:31 PM.
### #80 skybsd
Hello everybody,
Gonna steal this one straight from our friends at Astronomy Forums but I wanted to know, given your experience, what equipment would you not have purchased if you could do it all over again? Can be anything from a Scope to just equipment. This can help to weed out stuff that's not needed so other members don't make the same mistakes. For me, I would say my original scope, a Meade 114mm EQ which was terrible and broke on me. Due to that I have not to this day bought from Meade again and I don't think I ever will. What about you guys?
Good to hear from you - nice post.
Taking your question as I read it - I'd say that for ME, its the first generation Baader Hyperion Click-Lock Zoom Eyepiece.
That thing (I simply refuse to call it an eyepiece!) was ABSOLUTELY AWFUL! Where do I start?
The simple act of unwrapping it revealed leaked black lubricant all over the thing - and my hands! Oh - and the smell - WOW!
I cleaned it the best I could and I left it out to air, hoping that the smell would dissipate so I could take it for a spin that night - Boy was I wrong.
The lubricant had continued to secrete out and it still stunk. I did what I could to tidy it up and tried to use it - Don't know what was worse, the smell, the inability to focus or that after a few minutes my fingers were covered in that muck again.
I gave up - walked into the house and tossed it in the garbage - got myself cleaned up and called it a night.
That was the moment I decided to NEVER buy another Baader eyepiece - EVER IN MY LIFE!
### #81 Oriondk
Buying a house with very, very limited view of the sky.
### #82 Gary Z
Purchasing my first scope, the Meade ETX 80. This purchase was made after researching considerably and it received many favorable reviews. During the one year that this was under warranty, it was replaced multiple times. Meade never gave me any issues with each replacement. Either the optics were bad or the motors were bad. However, I did get some enjoyment out of it. I still have the last one I received. I later purchased the Celestron 8 SE and from the very first time out, I was amazed how well it worked in comparison. Granted the price difference is considerable, but I had purchased the Meade as it was light weight for my back. When I got the Celestron SE Mount, I was amazed as to how portable it is. Seeing it in use at a star party sold me on it. I've been quite happy with Celestron and even purchased a used EVO 8 mount last year.
### #83 Umasscrew39
As time goes on, I am realizing my list of purchase regrets is growing.
- several filters (way over-rated for the value they provide)
- off-axis guider (never used it, never needed it)
- other things I just can't remember off hand
Edited by Umasscrew39, 03 July 2017 - 10:12 AM.
### #84 REDSHIFT39
Skywatcher 150mm Mak-Cass: I always had thermal issues with it in my climate. It took longer to cool down than my 10 inch Dobsonian. It also had a dimmer image than my current 150mm reflector. I also regret buying all those cheap eyepieces; I should not have been so cheap and just bought the Tele Vues from the start.
### #85 ForgottenMObject
Buying a house with very, very limited view of the sky.
I know that feeling on some level. While I only rent an apartment, the trees have overgrown everything over the years, and LED insecurity lights installed last year give the place the charm of a prison complex now. My sky is basically gone. The other half of it is that although the place is old, I haven't had much of any real problems in the 15+ years I've been here, so I'd be hesitant to give that up and risk moving to some shady dump for a better sky. I *could* buy a small house, but prices are sky-high here in Maryland for what usually amounts to poorly maintained junk homes, and job security is non-existent anymore these days - I've already been through the "fun" of being out of work twice - so I hesitate to commit to any big purchases in an era where we're all seen as disposable.
So it leaves me trapped in a loop: stay here without a decent night sky, or risk other problems just to get the night sky back; that is, risk buying a house with a night sky view, but then hope that job security exists. There's no good answer, and it's all kind of dumb in a way. No wonder it's hard to get people into this hobby: on top of time, money, and skill in it, you need clear, dark skies.
Edited by ForgottenMObject, 04 July 2017 - 11:41 AM.
### #86 izar187
For me, bottom end priced wide field ep's.
Too few elements, or poorer finished glass, or lousy coatings, or some combination of these.
Lots of time wasted too, comparing the views through bottom end wide fielders. Time just lost.
Between the bottom end, and the multi hundred dollar top tier, there are tons of well working ep's.
If on a budget, as the vast majority of us on the planet are.
Then simply save up for as many weeks longer as needed. However many weeks that may be.
Get to a telescope field and SEE what works in your focal ratio scope.
Along the way you may discover as I did, that Tele Vue makes some of the finest ep's there are.
But there is no target in the heavens that can only be seen in theirs.
But there are targets, right there in the fov, that were missed in bottom end wide fields.
Trying to make a scope fit a car I owned. Lots more time wasted.
Get a good used vehicle that fits the scope you enjoy. Always.
### #87 Oriondk
Buying a house with very, very limited view of the sky.
I know that feeling on some level. While I only rent an apartment, the trees have overgrown everything over the years, and LED insecurity lights installed last year give the place the charm of a prison complex now. My sky is basically gone. The other half of it is that although the place is old, I haven't had much of any real problems in the 15+ years I've been here, so I'd be hesitant to give that up and risk moving to some shady dump for a better sky. I *could* buy a small house, but prices are sky-high here in Maryland for what usually amounts to poorly maintained junk homes, and job security is non-existent anymore these days - I've already been through the "fun" of being out of work twice - so I hesitate to commit to any big purchases in an era where we're all seen as disposable.
So it leaves me trapped in a loop: stay here without a decent night sky, or risk other problems just to get the night sky back; that is, risk buying a house with a night sky view, but then hope that job security exists. There's no good answer, and it's all kind of dumb in a way. No wonder it's hard to get people into this hobby: on top of time, money, and skill in it, you need clear, dark skies.
I know what you mean about prices. If I wasn't married I'd find a small piece of land in the country and have a 400 sq. ft. Tiny house built, lol.
## Meade ETX-90AT UHTC
• topic starter
Ok this is my first post. I am in the process of trying to figure out a good telescope for my first ever purchase. I think I have decided on the Meade ETX-90AT UHTC from Vanns.
However if I can't get this one I am looking at the Meade ETX-70AT.
Basically I was hoping you all could fill me in on the scopes, as to whether or not these two scopes will meet my needs for seeing nice, clear, colorful shots of planets and deep space objects.
Like I said I am just getting started in astronomy and I want to buy a telescope that will fit me for a lifetime of enjoyment.
Thanks for your help,
Bill
### #2 Guest_**DONOTDELETE**_*
• topic starter
I have an ETX-90RA (manual slow motion controls and no goto) and have been happy with it. A UHTC scope would be even better. However, the ETX-70 is a short achromatic refractor which will have quite a bit of chromatic aberration (violet fringes) around bright objects.
The one annoying thing about the ETX 90 is the poor finderscope. It is a straight through finder which is impossible to see through when aimed near the zenith.
I personally do not like the idea of goto, but other people may like it. I would rather find objects myself than have my telescope find them for me. (just my humble opinion)
The ETX-90 is a good choice for a beginner scope.
Welcome to Cloudy Nights!
### #3 Guest_**DONOTDELETE**_*
• topic starter
Thanks Ian,
Due to budget constraints I am only going to be able to purchase the ETX-70AT. I plan on purchasing a 2X Barlow lens and a wide view eyepiece as well.
I hope that it will be a good scope to learn on and give me a feel for the greater potential to becoming more than just a novice.
### #4 Guest_**DONOTDELETE**_*
• topic starter
### #5 Guest_**DONOTDELETE**_*
• topic starter
Or you could go with the non-UHTC version. I have it and it is not bad.
### #10 Guest_**DONOTDELETE**_*
• topic starter
Topcat,
I am confused then. I saw the Konus MotorMax 90 and its magnification isn't any better than the Meade ETX-70AT. What I would really like is for someone to come out and say that Meade sucks, you should buy another brand, and here are the reasons why. Yet every review that I have read about the ETX-70AT says that the optics are above average for its size and that it is a wonderfully good scope for both beginners and intermediates. Plus it works very well as a spotting scope, which is another appealing feature to me since I like to watch birds.
Since I am a newbie can someone break it down into simple sentences for me?
### #11 amirab
Well, it's certainly not about magnification but LIGHT GATHERING.
Get the largest aperture you can for your money (provided the optics are good).
### #12 Guest_**DONOTDELETE**_*
• topic starter
### #14 Guest_**DONOTDELETE**_*
• topic starter
Ok so how are the optics on the Konus MotorMax 90? I can't find any reviews on the scope and I am always apprehensive about buying something that no one comments on.
Unfortunately I have already purchased the ETX70-AT, but Vanns has a great return policy, so if it doesn't fit the bill I will consider returning it and buying the Konus MotorMax 90 if I can get some hardened proof that it is a good scope. Although I have a 2 year old son and I hope some day he will be interested in searching the skies with Dad.
So I may just hold onto the Meade, pending it delivering as promised, and when he is old enough spring for a larger scope so that both he and I can search using our own scopes together and compare.
Thoughts? By the way thanks for all of your inputs. I really appreciate it. I am finding out that this is another expensive hobby. My wife thinks I can't have a hobby unless it costs a small fortune. She may be on to something.
### #16 Guest_**DONOTDELETE**_*
• topic starter
Thanks Topcat.
I was just rereading the reviews located on cloudy nights about the ETX60-AT & ETX70. They both make good compelling sources to establish a basis for buying the two scopes. I was a little disappointed with the review of the Orion StarMax 90, but it did perform well overall. I just want to make sure that I am getting the best bang for the buck since I am on a limited budget.
I bought the ETX-70AT from Vanns.com and I got the Meade 882 tripod, Meade 773 hard case, MA9 mm & MA25 mm lens, #494 Autostar Controller Included (cost $200's retail) all for$256.00 which included free shipping.
I haven't been able to find a better deal than that. Also I picked up a Meade 3X Barlow lens & 45 degree Prism from Amazon.com for another $100's. So my total package was just over$356's.
Call me nuts, but it seems to be the best deal going out there right now. Since I haven't the first clue about astronomy, the GO TO feature made the deal even sweeter to me, but if you all are confident that I can learn star locations rather quickly, I may have to reconsider after I hear what you have to say about the Konus MotorMax 90, of course.
Thanks once again, The newbie,
Bill
### #17 Guest_**DONOTDELETE**_*
• topic starter
### #18 Guest_**DONOTDELETE**_*
• topic starter
I plan on taking an Astronomy class next spring. I am finishing up a BS degree in Computer Information Systems at High Point University, a local college. I have to take one science course, either Biology or Astronomy. Astronomy has always been something I have been interested in, so I am excited to finally own a telescope. I have seen some really nice pictures taken by members on this forum with their scopes, and maybe someday, after I am well polished, I will try some astrophotography myself.
## Using a computer / phone device on the field
But the point remains: PalmOS is very much an out-of-date OS.
I imagine you could say the same thing for XP or 2000.
But in any event, I find that Planetarium is still a very effective program and it seems to offer some features that are lacking in Android programs and in the iPhone apps I have tried.
Hopefully some day there will be a serious app for the Android OS but I am not holding my breath.
### #27 arpruss
Vendor: Omega Centauri Software
### #28 Jon Isaacs
Tell me about the Planetarium features that are important to you but lacking on iPhone and Android apps. It might inspire a developer (maybe even me). :-)
I am not interested in the iPhone and there are plenty of people working on iPhone apps and there are some pretty good ones out there. For the Android, the list is long; from what I see, there are no "serious" astronomy apps out there. Most phone apps seem to focus on the ability of the phone to use its sensors to point the phone in the direction of the object. I don't care about this, though obviously a developer would want to include it.
So, what does a good Android app look like:
To me it looks like Cartes du Ciel or The Sky on a handheld device. I use the Palm for about everything when I am observing, Planetarium does it all. If I want to know the phase of Venus on June 24th, 2017, it takes a few seconds.
- Large databases that are easily searched and filtered, including double stars. No typing for searching, I want a menu structure that allows me to "click with the Stylus" and quickly navigate.
- Easily customized screen settings, adjust labeling as desired. Because of the difficulty of selecting an individual object accurately with a finger, current programs seem to label everything, slowing everything down and cluttering up the screen so that it is unusable.
- The ability to size the screen to an exact dimension; none of this pinching to zoom. If I want a 5 degree field of view, I want a 5 degree FoV.
- Good information pages including rise and set times for a week. Time menu easily accessed so that looking at a month or a year in the future is easily done.
- The ability to show the distance between two successively selected objects, say how far is it from M6 to M7 (+03°48'42.9")
- No requirement to hook up to the web or 3G network, some apps seem to need this, the 3G access out there in the Valley of the Gods Utah is pretty weak.
I guess that is enough to start with.
### #29 psonice
Jon, I can do almost all of that with starmap pro:
- Decent catalogue (not sure of the details, but I seem to remember stars go to mag 16), easy to browse through (including doubles, yes, and easy to do with a finger or a stylus).
- You can customise the menu (not sure about names, it's clear enough anyway).
- Labels can be turned on and off for different objects.
- There are zoom buttons for when you don't want to pinch and zoom.
- There's no way I've seen to select a FOV, but selecting a FOV seems somewhat antiquated anyway. You just tell it what equipment you have, and switch between different scopes and EPs, it matches the FOV for you, including for CCDs if you tell it the sensor size.
- Not sure about rise + set times for the week, but there's an excellent 'tonight' screen that shows all the stuff visible in your equipment with rise + set times. There's also rise + set for every object, a graph of its height in the sky over 24 hours, and a graph of altitude at midnight over a complete year.
- Distance between objects, maybe not. Never had a reason to look.
- No data connection required, except for photos (it can pull down lots of user-submitted photos for the objects, and if you submit your own it'll show yours within the actual sky view).
### #30 Astraforce Paul
Hey, folks, no one is arguing that a vintage Palm is equivalent to a modern iPhone or iPod Touch. Of course, it isn't. It can't surf the web, do wi-fi, make calls, automatically download podcasts, do Cover Flow, put the coffee on, etc. And no one is arguing that Palm OS 3 or 4 is a modern operating system! The OP's question had to do with options for using a device in the field at night without affecting one's vision, and the SONY Clie is absolutely a viable option for that.
Even though dated, the Palms and SONY Clies are still wonderful choices for astro observing--big time! Jon is 100% right about that. Anyone who says otherwise has been drinking too much hooch! For many uses, particularly star hopping, Planetarium is superior to any i-device app.
### #31 Astraforce Paul
Planetarium Features
arpruss, you asked about Planetarium's features. Here's my top 10 list--and you'll find few of them in any of the i-device astro apps. I rely on them in almost every observing session.
1. The ability to set differing star magnitudes for different fields of view, so that your screen matches what's in the eyepiece--and what you see in the sky. This is invaluable for star hopping. Most of the i-apps give you control over stellar mags for one fov, but then the app takes over and messes things up gloriously (far too faint a limit--or too bright) for other fovs. It's great knowing that with Planetarium, the stars in the wide horizon, constellation, finder, and eyepiece fields of view will always match what you see.
2. Tap on a menu and select an exact field of view--and get it automatically. Pinch-zooming is all well and good, but it gets tiresome and is terribly inefficient and imprecise. (And doesn't work that well in cold climates with gloves on! )
3. Tap on two objects consecutively and their angular separation shows up. Useful for all sorts of reasons--star hopping, etc. Just used it this morning for the Venus-Jupiter-Mercury-Mars conjunction.
4. Turning particular object catalogs on and off. Many advantages, but a big one is that it lets you show the objects you are interested in--e.g., all the Messiers and 100 Best NGC (or Herschel 400 or whatever).
5. #4 gives the observer control over the showing of DSOs in a large, horizon or constellation-sized fov as well as a small fov. The current crop of astro apps set this automatically and either end up showing you too many DSOs or too few (Astromist may be an exception; it's one of the few I haven't tried).
Even better would be to combine catalog on/off options with controls for DSO magnitudes. Most i-device astro apps lack controls for DSO catalogs & magnitudes--or you can set the magnitude universally or for only one fov. That doesn't work because ideally you'd want to show open star clusters to, say, mag 7, globular clusters to mag 9, galaxies to mag 11, nebula to mag X, etc. and not have the app override all that and show clusters and galaxies to mag 17 when looking at a small fov. Users should have control.
6. 10 minute and custom time steps. Only being given a choice of a minute or an hour, as some leading apps do, doesn't cut it. 10 minutes, 1/2 hour, or whatever the user wants, work better.
7. A single tap to turn constellation lines on and off.
8. One tap access to changing the orientation of the fov.
9. Two tap access to Jupiter's moons.
10. Additional, uploadable, even user-created, catalogs of objects.
This Cloudy Nights thread has more details of how Planetarium does its magic.
### #32 arpruss
Vendor: Omega Centauri Software
arpruss, you asked about Planetarium's features. Here's my top 10 list--and you'll find few of them in any of the i-device astro apps. I rely on them in almost every observing session.
1. The ability to set differing star magnitudes for different fields of view, so that your screen matches what's in the eyepiece--and what you see in the sky. This is invaluable for star hopping. Most of the i-apps give you control over stellar mags for one fov, but then the app takes over and messes things up gloriously (far too faint a limit--or too bright) for other fovs. It's great knowing that with Planetarium, the stars in the wide horizon, constellation, finder, and eyepiece fields of view will always match what you see.
By the way, 2sky for PalmOS also does this, too (and it's free now).
One limitation of this model is that one may want different magnitudes for different scopes and different finders and even different observing locations, and to handle that one would need profiles, and that gets into a mess. Another limitation is that for star-hopping one may sometimes want to zoom out (say, to whatever magnification the finder view has) while keeping the same magnitude limit as in the eyepiece view.
Wouldn't it be better just to have the software keep track of apertures (and maybe other details like sky conditions), so you can quickly switch between, say, 7mm (naked eye), 68mm (finder) and 333mm (scope) views, and then have the software calculate with a good model what you should be able to see at each zoom level with that aperture, with some global user adjustment?
You could also have a default zoom level for each aperture.
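One classic back-of-the-envelope model for this (a sketch only; a real app would fold in sky brightness, magnification, transmission, and the global user adjustment mentioned above) is that the gain over the naked eye scales as 5*log10 of the aperture ratio:

```python
# Rough limiting-magnitude model: scope limit ~ naked-eye limit + 5*log10(D_scope/D_pupil).
# Illustrative only; the constants (6.0 mag naked-eye limit, 7 mm pupil) are assumptions.
import math

def limiting_magnitude(aperture_mm, naked_eye_limit=6.0, pupil_mm=7.0):
    return naked_eye_limit + 5 * math.log10(aperture_mm / pupil_mm)

for d_mm in (7, 68, 333):   # the naked-eye / finder / scope apertures mentioned above
    print(f"{d_mm:3d} mm -> approx. mag {limiting_magnitude(d_mm):4.1f}")
```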
2. Tap on a menu and select an exact field of view--and get it automatically.
I can see how that would be handy. (On the other hand, in AstroInfo, zoom is always by a factor of two, and I find that that is a nice balance between getting the FOV I want, and being able to accurately move between FOVs with a few presses of the up/down keys.)
3. Tap on two objects consecutively and their angular separation shows up. Useful for all sorts of reasons--star hopping, etc. Just used it this morning for the Venus-Jupiter-Mercury-Mars conjunction.
Yeah, that's a nice feature. One thing that I assume good PalmOS developers got right is minimizing the number of taps for a common task.
In 2sky, you can do it but it needs three taps to get to the distance (you need to close the info screen for the first object, which you got when you tapped it). I may add this feature to AstroInfo--seems handy. (AstroInfo has a measure mode where you can draw lines between points on the screen and it measures them. That's not so accurate.)
10. Additional, uploadable, even user-created, catalogs of objects.
## Styjun
I am using oidc-client with angular 7, and I want to enable logging. The doc suggests that I can do the following
I have not been able to make this work as Oidc does not appear to be on the window object??
Did you find the answer? I'm facing the same issue.
## Projects under the Observatory Control System
The Observatory Control System software provides for request and observation management, observation scheduling, and a science archive. An observatory that adopts this software has the option to use all of the parts, or only a subset of them. Specifically, the projects that make up the software are:
### Observation Portal
This Django application is the main interface that astronomers interact with to submit observation requests and to monitor the status of those requests. It also stores the observing schedule that is generated from all observation requests by the scheduler. It is fully backed by APIs and includes modules for the following:
- Proposal management: calls for proposals, proposal creation, and time allocation
- Request management: observation request validation, submission, and cancellation, and views providing ancillary information about them
- Observation management: store and provide the telescope schedule, update observations, and update observation requests on observation update
- User identity management: provides OAuth2 authenticated user management that can be used in other applications
### Configuration Database
This Django application stores observatory configuration in a database and provides an API to get that configuration, which is needed by the observation portal to perform automatic validation and to calculate estimated request durations. It includes details on the configuration of sites, enclosures, telescopes, instruments, and cameras. The camera configuration has customizable sets of modes and optical path elements to support a wide range of current and future instrument configurations. The configuration is also used by the scheduler to determine available telescopes.
### Downtime Database
This Django application stores periods of scheduled telescope downtime in a database and provides an API to retrieve those periods of downtime. Scheduled downtimes occur for a variety of reasons including maintenance and education use. Downtimes are used in the validation of requests in the observation portal and are also used by the scheduler to block out time that is not available.
### Scheduler
This Python application creates telescope observing schedules. It gets a set of observation requests from the observation portal, computes a schedule, and then saves a set of scheduled observations back to the observation portal. It currently uses the GUROBI solver to solve for the schedule but will provide the option to use an open source solver instead.
### Rise-Set Library
This Python library wraps the FORTRAN library SLALIB. It performs visibility calculations for requested targets in both the observation portal and the scheduler. It supports sidereal and non-sidereal target types and includes airmass, moon distance, and zenith constraints on visibility.
### Science Archive
This Django application provides an API to save and retrieve science data. Certain metadata are stored in a database for easy querying, and full image data are stored in AWS S3 for download.
### Ingester Library
This Python library aids in uploading data to the science archive.
## Preparing High Quality, Accessible Figures
We recommend that authors familiarize themselves with best practices for the creation and accessibility of scientific visualizations. Resources, including “Ten Simple Rules for Better Figures,” should be consulted to improve the impact and readability of your figures. Specifically, we call out Rougier et al.’s Rule 5: Do Not Trust the Defaults, and strongly encourage you to check the color defaults used by your preferred visualization software. Tools such as Color Oracle should be utilized to check your figures for accessibility before submission; correcting your figures may be achieved by adopting color maps such as viridis (e.g., the default colormap since matplotlib 2.0, also available for R via CRAN) or cube-helix (Green 2011), which is available in some astronomy software tools such as AIPS or Aladin. The public domain R statistical and graphical software environment has flexible color options with 657 color names and palettes based on the Color Brewer, Hue-Saturation-Value and Hue-Chroma-Luminance systems. See R guidelines here (PDF) and here (PDF). Use of unsaturated colors is recommended when symbols overlap in crowded diagrams; this requires the PDF rather than EPS format.
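As a minimal illustration of this advice (not a journal requirement), here is a matplotlib sketch that adopts the viridis colormap and varies line styles as well as colors, so the curves remain distinguishable in greyscale:

```python
# Minimal accessibility-minded figure: viridis colors plus distinct line styles.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
styles = ["-", "--", ":", "-."]
colors = plt.cm.viridis(np.linspace(0.0, 0.9, len(styles)))  # perceptually uniform colormap

fig, ax = plt.subplots()
for i, (ls, c) in enumerate(zip(styles, colors)):
    ax.plot(x, np.sin(x + i), linestyle=ls, color=c, label=f"model {i}")
ax.set_xlabel("x")
ax.set_ylabel("sin(x + i)")
ax.legend()
fig.savefig("figure1.pdf")   # vector output, one file per figure
```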
When we prepare the published version of your manuscript we may rearrange or resize the figures, so it is helpful if you can ensure that each figure or subfigure is in a separate file. If a figure is part of a lettered, multipart figure, place the letter within the box around the figure, not outside of it. If the letter cannot be placed within the box, lettered tags can be typeset. Page numbers, figure numbers, file information, etc., should not be included in figure files.
If you feel that figures in the published article must be sized or arranged in a certain way, please include a file which describes your requirements; the Production Editor may contact you about this when the manuscript is accepted. Note that extra charges will be incurred if you decide to make alterations to figures at proof stage.
Additional guidelines and tools include:
- Fonts, lines, symbols: Try to use only common fonts, such as Times, Helvetica, or Symbol, in figures. Spelling and use of numbers and units in figures should conform to usage in the body of the text and figure legends. A minimum of 6 pt font size is acceptable. There should be consistency of appearance between the size of symbols and the size of type within a figure, and between the weight of the lines and the weight of type within the figures. Lines in figures should be at minimum 0.5 points, and if you use dotted or dashed lines you should check that the different sorts of lines are distinguishable when the figure is small.
- More on Color Accessibility: The use of color as the only distinguishing delimiter in a figure should generally be avoided. Colored lines should also use different line styles; colored symbols should be varied in shape; colored histograms should use different hatching or weights. These types of choices greatly enhance the usability of a figure for a low-vision or color-blind reader or for a reader who can only utilize the resulting manuscript in greyscale.
- AASTeX Specific Advice: Instructions for structuring and placing figures using AASTeX are available. Users of AASTeX 6.3+ will find a new LaTeX command, interactive, for tagging animated and interactive figures directly in their LaTeX files.
- Finding figures from AAS Journals articles: We have centralized all the figures in articles from all AAS Journals since 1997 on our Astronomy Image Explorer (AIE) tool. The figures from your final article are posted at the AIE at publication sans embargo. They are provided in high resolution JPG, PDF, and PPT formats. As a result the NASA ADS has integrated these graphics into their article landing pages (e.g., Mao et al. 2015) and link back to the AIE for each regular figure.
## Create a mosaic of the Moon
András Papp won the Our Moon category in the Insight Investment Astronomy Photographer of the Year 2015 competition. Here's how he did it.
Published: September 9, 2019 at 8:41 pm
When creating high-resolution lunar mosaics you need to plan your imaging session. It takes a long time to capture the required number of AVI movie files, so the position of the Moon in the sky changes significantly. Taking into account the field of view of your camera and the ability of your telescope, it’s always a good idea to capture more panes with greater overlap rather than miss one part of the Moon.
More on this is available in our guide on how to photograph the Moon.
Our starting AVI movie files were captured through a 5-inch telescope with a DMK 41 CCD camera.
For the panes of the sunlit side of the Moon’s disc you should ideally take thousands of frames, but hundreds of frames are enough for panes of the darker portion along the terminator.
For pre-processing, we’re going to use IRIS, a small but powerful freeware program.
Click File > AVI Conversion to open a dialog box and select the AVI movie file you would like to process, then load it into the program.
Next, you need to do a quality analysis and grade the single frames in a decreasing order. Use the Best Of and Select commands for this activity.
Once done, select Processing > Planetary Registration (1) from the top navigation menu.
In this menu, you can align the frames to the sharpest one. The next step is to stack the aligned frames by using the Add_Norm command. At this point, only the sharpening is missing.
Here a Van-Cittert deconvolution can be a useful technique. To apply that, use the Van-Cittert command.
You need to define two values in order to run it. The first value is the FWHM radius of the stacked image, the second value is the number of iterations.
The values of the Van-Cittert deconvolution are dependent on the sky conditions, therefore each time you apply it you need to find the best combination.
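For readers curious what this step is doing, the sketch below shows the textbook Van Cittert iteration in Python. It illustrates the general technique only, not IRIS's actual implementation; in particular, building a Gaussian PSF from the FWHM value is an assumption.

```python
# Classic Van Cittert deconvolution: f_{n+1} = f_n + beta * (g - PSF convolved with f_n)
import numpy as np
from scipy.ndimage import gaussian_filter

def van_cittert(image, fwhm_px, iterations, beta=1.0):
    sigma = fwhm_px / 2.3548            # convert FWHM to Gaussian sigma
    estimate = image.astype(float).copy()
    for _ in range(iterations):
        reblurred = gaussian_filter(estimate, sigma)
        estimate = estimate + beta * (image - reblurred)
    return np.clip(estimate, 0, None)   # clip negative overshoot

# e.g. sharpened = van_cittert(stacked, fwhm_px=3.0, iterations=10)
```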
Save your work by using the Save TIFF command; this will create a 16-bit TIFF image in your working directory, which can be loaded in many programs for post-processing.
Process all of the panes with this same workflow for consistency.
You can use Photoshop to stitch the pre-processed panes together. First create a high-resolution base image with a black background.
Define its size based upon the scale of your captured panes. It is practical to start the stitching along the terminator where the most detail lies.
Open the first pre-processed pane, copy it as a new layer to the base image. Load another pane which is directly next to the previously opened one, and paste it to the same place as a new layer.
If you adjust the top layer blending mode to Difference, you can easily align the two panes by using the move tool. Once done, set the blending mode back to Normal.
Then create a mask on the top layer by clicking Layer > Layer Mask > Reveal All. On this mask, paint a gentle transition on the overlap area with the brush tool.
Edit the brightness and contrast as necessary using Levels or Curves, both of which can be found by first clicking Image > Adjustments.
Follow this stitching process until the daylight portion of the Moon’s disc is completed, then click Layer > Flatten Image.
Now centre a different file and using the same process, stitch together the dark side as well.
You’ll end up with two files, one of the night portion and one of the daylight portion of the lunar disc.
Copy the two halves into the same image but on different layers. The night portion needs to be the lowest layer.
Adjust the brightness of the dark side with Levels or Curves until you get a natural view of the Moon.
Now, hover your mouse over the sunlit side layer and generate a layer mask as described above.
Finally, use brush tool to create the silky transition on the layer mask along the terminator.
When you are happy with your result, click Layer > Flatten Image to merge the two layers.
This article originally appeared in the August 2016 issue of BBC Sky at Night Magazine. András Papp won the Our Moon category of 2015’s Insight Astronomy Photographer of the Year competition with this image.
# Introduction¶
The TiCkS (Timing and Clock Stamping) board has been developed as a proposed time-stamping board for CTA, based on the White Rabbit SPEC board (Simple PCIE Carrier). It should be installed in a CTA camera, and accepts trigger signals from the camera trigger electronics which it then time-stamps with nanosecond precision, sending the time-stamp and associated camera event data (TiCkS local event number, optional serial data provided by the camera) to the central DAQ trigger system (SWAT, SoftWare Array Trigger) for coincidence identification. This enables the DAQ to drop non-coincident events.
The TiCkS is a White Rabbit node, so it is connected over a single mode fibre to a central White Rabbit switch, through which it is networked at 1 Gbps to standard ethernet. The White Rabbit switch and the WR PTP core firmware (v4.2) allow the TiCkS to be synchronized with the time on the White Rabbit switch, which should itself be synchronized to a central GPS system either directly or as a slave to a master WRS connected to the central GPS system.
The TiCkS firmware has in particular added the functionality to send these data by UDP over the White Rabbit fibre, among other functionalities described below. The TiCkS firmware can also be used as-is on a commercial SPEC board, using an FMC interface board, which could be a CTA 2xRJ45 FMC or a DIO board, for testing.
The TiCkS board exists in two versions: the first follows the CTA UCTS interface definition [RD2], including the form-factor, power, and camera trigger interface of 2x RJ45 connectors, while the second has an FMC connector and can be used with a SPEC DIO (Digital Input/Output) board. The latter will be placed on the OpenHardware repository, for free general use.
# White Rabbit Switch¶
White Rabbit Switch: https://www.ohwr.org/projects/white-rabbit/wiki/switch
White Rabbit Switch Software: https://www.ohwr.org/project/wr-switch-sw/wikis/Release-v501
• User Manual
• Startup Guide
• Developer Manual
• SFPs for WRS:
Figure: View of the TiCkS board.
Figure: Schematic block diagram of the TiCkS set-up (on the TATS test-bench, Telescope Array Trigger S(t)imulator)
# Reference Documents¶
| A | Title | Reference | Edition |
|---|---|---|---|
| RD1 | Ticks board TDR | MLST-CAM-RP-0220-APC | 4 |
| RD2 | CTA interface document camera to UCTS | I-ACTL-CAM-1000 | 1 |
# Features¶
• 1ns precision TDC
• UDP Tx/Rx (trigger timestamps sending and slow control)
• PPS (Pulse-per-Second) output, on the second marker in absolute time
• x MHz output (10MHz standard, other frequencies may be programmed, to be tested)
• SPI slave (to receive trigger type and other info from camera), 16 bits
• 10ms timeout on UDP Tx for low trigger rate (for bunch Tx), start after previous bunch sent
• 200ns minimum time between events (to avoid re-triggering on showers)
• Provides external trigger at programmable time (8ns granularity, <ns accuracy)
• SNMP-provided monitoring of parameters
# Operational States¶
• Synchronizing:
• On power-up, the TiCkS starts in the Synchronizing state and transitions to the Reset/Standby state once the WR clock is synchronized to the master.
• Reset/Standby:
• Arriving in the Reset/Standby state from Synchronizing, the TiCkS produces PPS pulses as soon as possible, as required for some cameras.
• On reception of a Reset command over UDP (see section 6) the TiCkS goes into this mode.
• In this state, no trigger inputs are accepted, and the event counters (read-out and busy counters) are set to zero
• On reception of a GetReady command, the TiCkS goes into Ready state
• Ready:
• The TiCkS does not accept triggers in this state
• The TiCkS waits for the next PPS, which sends it into the Running state
• Running:
• On transition into Running state, the TiCkS immediately sends an “External Trigger” signal to the Camera trigger electronics, as a signal for it to zero its counters
• The TiCkS accepts triggers from the camera in Running state, and time-stamps them, including SPI if this is enabled
• On reception of a Reset command, the TiCkS goes back into Reset/Standby state
# Signal characteristics¶
The TiCkS input/output to the CTA Camera’s trigger electronics follows the CTA-defined interface [1], sending/receiving LVDS signals over 2 RJ-45 connectors.
All LVDS signals follow the standards ANSI/TIA/EIA-644-1995 and IEEE 1596.3 SCI-LVDS.
The trigger signal received for time-stamping should be longer than 20ns.
The 10MHz signal sent from the TiCkS should be synchronized to the 1PPS (pulse-per-second), to within ~8ns (arriving at or after the PPS). The duty-cycle is set to 50%.
The SPI is used in Master-Slave mode where the Camera is the master and the TiCkS is the slave, with 16 bits width. The clock polarity CPOL = 0 should be used (see https://en.wikipedia.org/wiki/Serial_Peripheral_Interface#/media/File:SPI_timing_diagram2.svg). If the SPI message does not arrive within 500ns, its value is set to 0xAA and the TiCkS ignores any late-arriving SPI messages.
The external trigger sent by the TiCkS is 40ns in length (can be modified with firmware resynthesis)
# Set up¶
This firmware has been developed to be used on stand-alone TiCkS used as a node. It also needs a White Rabbit Master: either a White Rabbit Switch connected to a PC or a SPEC in a PC’s PCIe slot. The first one is preferred as it is closer to the final implementation for CTA (indeed, the second can only be done on an outdated OS):
• PC with Scientific Linux 7 or CentOS 7 connected to a White Rabbit switch through copper SFP.
• TiCkS or SPEC+ RJ45-FMC connected to the WR switch through SFPs + optical fibre.
• DHCP server on the PC configured according to CTA documentation.
TiCkS has the standard CTA interface (2xRJ45) or one SPEC + 2xRJ45 FMC can be used. For the trigger signal input, TiCkS requires that this be >20ns in width in order for both the time-tagging and the increment of the event counter to function correctly.
Note that on Power-up, the TiCkS starts in the Reset/Standby state, and should produce PPS pulses as soon as possible (once the WR clock is synchronized to the master), as required for some cameras. Note: The UCTS will not produce PPS pulses unless and until it is synchronized to the master. This avoids having an unsynchronized PPS pulse being distributed.
# TiCkS configuration¶
Both Rx and Tx are implemented. Tx is used to send timestamps and some additional data (see data format section). Each UDP frame is at most 350 bytes, with a data payload of 12 bytes per event plus a bunch tailer of 20 bytes, giving a payload size of 308 bytes for 24 events. This size may be smaller if the Tx timeout is reached, with fewer events per bunch transmitted. Rx is used for the reset counters protocol and for slow control.
Some functionalities of the TiCkS board can be configured by sending a 64-bit word over UDP.
The 4 LSB bits choose the function and 60 other bits are the value to set (little endian)
• 0x”0” for run/reset counters
• 0x”1” to set the MAC address of the destination
• 0x”2” to generate an external trigger at a given date/time
• 0x”3” to set trigger throttle value
• 0x”4” to set IP destination address for data instead of that derived from the TiCkS IP address obtained from bootp
• 0x”5” to enable/disable SPI reception (in this case, the SPI data field is set to 0x0000)
• 0x”6” to change the destination port for data instead of the default of 55000 in TiCkS
Note: these commands can come from any IP; only the port is checked to be the correct one. The commands can be sent while the TiCkS is acquiring triggers, but as there is a small (albeit very low, as measured in tests) probability that the command reception can interfere with data taking, it is recommended to send these commands only in the Reset mode (except for the Reset command itself).
# UDP Network Configuration¶
The firmware uses bootp to get its IP address. The destination IP address (the PC connected to the switch running the CDTS-server or equivalent, which receives the data) is computed from this address. The first 22 bits of the TiCkS IP address obtained through bootp are kept, while the last 10 bits are replaced with 11-1111-1010 (3.250). For example, if the TiCkS obtained the IP address 10.10.128.99 with bootp, packets from the TiCkS will be sent to 10.10.131.250.
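A small Python check of this rule, using the example addresses above (illustration only; not part of the firmware):

```python
# Keep the first 22 bits of the TiCkS address, force the last 10 bits to 11-1111-1010.
import ipaddress

def data_destination(ticks_ip):
    addr = int(ipaddress.IPv4Address(ticks_ip))
    return ipaddress.IPv4Address((addr & ~0x3FF) | 0b11_1111_1010)

print(data_destination("10.10.128.99"))   # -> 10.10.131.250, as in the example
```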
Defaults are:
• TiCkS
• TiCkS Rx port (for receiving commands from DAQ) hard-coded currently to 55010
• MAC address is unique (from temperature sensor)
• DAQ PC or SPEC in PC
• TiCkS Tx port (for receiving time-stamps from TiCkS) default is 55000 (modifiable by command)
In addition, one needs to issue a 64-bit word through UDP each time TiCkS is powered up, to ensure that the TiCkS also receives the MAC destination address (PC address):
• 4 LSB are set to 0x1 (see previous); the next 48 bits are the MAC destination address
64 bits command example to set MAC destination address (CDTS server)
• if PC MAC address = 68:05:ca:3a:8f:28, send ‘FFF6805ca3a8f281’
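For illustration, a minimal Python sketch that builds the 64-bit word for command 0x"1" and reproduces the example above. The document gives the hex word to send; whether it travels as 8 raw big-endian bytes (as below) or in some other encoding is an assumption, and the TiCkS IP used here is hypothetical.

```python
# Build the "set destination MAC" command word and send it to the TiCkS Rx port (55010).
import socket

def set_mac_word(mac_str):
    mac = int(mac_str.replace(":", ""), 16)       # 48-bit MAC as an integer
    return (0xFFF << 52) | (mac << 4) | 0x1       # unused top bits at 1, command code in the 4 LSB

word = set_mac_word("68:05:ca:3a:8f:28")
assert word == 0xFFF6805CA3A8F281                 # matches the example above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(word.to_bytes(8, "big"), ("10.10.3.107", 55010))  # hypothetical TiCkS address
```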
# Counter Reset Procedure¶
A counter reset procedure has been implemented which uses UDP Rx capability of the TiCkS board. The counters for Event trigger, Busy trigger and PPS are the only ones involved in the reset procedure (i.e., when referring to “the counters” below). Note that the PPS and x MHz signals continue to be sent to the Camera during the reset procedure.
The default state at power-up is a reset state, with these internal counters of the TiCkS in reset, as is the TDC. No data are sent, only 20 bytes of bunch tailer with slow control information every 10ms (currently a hard-coded delay, can be modified with firmware resynthesis). To initialize counters and TDC, a “GetReady” command must be sent to the TiCkS over UDP.
• When 0xFFFFFFFFFFFFFFF0 is sent to TiCkS, this issues the command “Get Ready” described in the Reset procedure developed for LST/NectarCam, where the counters and TDC will start and an “External trigger” will be sent to the Camera just after the next PPS (#0). The PPS after that is counted as #1. The camera, having also been put into “Get Ready” mode by the controller, uses this external trigger signal to start its event, busy, and PPS counters.
• If 0xFFFFFFFFFFFFFF00 is sent, this issues the command “Reset/Standby”, where the counters will stop and be set to zero, and TDC will also stop.
The reset state can be checked by reading bit-14 in the bunch tailer, which is set to ‘1’ if TDC and counters are enabled, ‘0’ if stopped and set to zero.
# Generate an external trigger¶
An external trigger can be generated at a given date (8ns granularity, <1ns accuracy) by sending that date over UDP.
• 4 LSB are set to 0x2 (see previous), the next 52 bits are the date at which to generate an external trigger
• 28 bits from 4 to 31 are the 8ns part of the time, within the second
• 25 bits from 32 to 56 are the seconds part of the date (TAI)
Note, the seconds part of the time can be generated easily by the Linux “date” command (!!as long as the DAQ PC is set to the current time, e.g. with NTP), such as:
date +%s # Current date in seconds
date +%s --date 'now' # same
date +%s --date '1 minute' # Date in 1 minute
date +%s --date '1 hour' # Date in one hour
date +%s --date '17:20:10' # Date at given time
date +%s --date '18 jan 2016 17:20:10 UTC' # Date at given date/time
...
date -ud @2000000000 # The inverse, convert Linux date to human-readable format
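A hedged Python sketch of assembling the command-0x"2" word from a seconds value and an 8ns sub-second count. The contents of the unused bits 57-63, the on-the-wire byte order, and any UTC-to-TAI offset handling are not specified above, so they are assumptions here.

```python
# Build the "external trigger at date" word: cmd 0x2, bits 4-31 = 8 ns part, bits 32-56 = seconds.
import time

def trigger_at_word(seconds, sub_second_8ns=0):
    word = 0x7F << 57                                   # unused top bits set to 1 (assumption)
    word |= (seconds & ((1 << 25) - 1)) << 32           # 25-bit seconds field
    word |= (sub_second_8ns & ((1 << 28) - 1)) << 4     # 28-bit 8 ns field
    return word | 0x2                                   # command code in the 4 LSB

secs = int(time.time()) + 60      # cf. date +%s --date '1 minute' above (TAI offset not handled here)
print(hex(trigger_at_word(secs)))
```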
# Trigger Throttle¶
A trigger throttle, where events are not written to the FIFO if the time between the first and last event in the bunch is less than 200us (configurable, see above), has been implemented and tested. In other words, the TiCkS will keep writing to the FIFO only if the time difference between the first and last trigger of the bunch is > 200us (default). This default corresponds to the programmed value x"30D3" / 12499, which is 200us of the 62.5MHz clk_sys. The throttle is currently enabled or disabled at the synthesis of the firmware.
Some cameras prefer to handle this in their trigger electronics, since the B-xST-1285 requirement now imposes a trigger rate cap on them (e.g. <30/14/1.2 kHz in 100ms for LST/MST/SST), so in future this may be programmable remotely (TBD).
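A quick check of that arithmetic (the final minus one is an assumption about how the firmware comparison is coded):

```python
clk_sys = 62.5e6                      # Hz
window = 200e-6                       # s, default throttle window
cycles = round(window * clk_sys)      # 12500 clock ticks
print(hex(cycles - 1))                # 0x30d3, the value programmed at synthesis
```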
# Data format¶
## Bunches of 24 events (i.e., up to 24, if no time-out; fewer if time-out)¶
308 bytes of data payload if no time-out (24 x 12 bytes + 20 bytes). SPI data width is 16 bits, and can contain Event-type data and other information (e.g. Stereo Trigger pattern, for those telescopes which have this). Each bunch is self-contained, and does not depend on the data in previous or following bunches, though the individual event information must be reconstructed using the information in the tailer.
## Event data, 96 bits (12 bytes)¶
| Bits | Content |
|---|---|
| 95 downto 80 | SPI data 16 bits set to ‘0’ when there is no SPI input |
| 79 downto 72 | event read-out counter 8 bits LSB (max 255/bch) |
| 71 downto 64 | event busy counter 8 bits LSB (max 255/bch) |
| 63 downto 62 | PPS counter 2 bits LSB (max 4 in bch) |
| 61 downto 60 | seconds 2 bits LSB (max 4 in bch) |
| 59 downto 59 | 1 flag, Busy |
| 58 downto 58 | 1 flag, SPLL locked & local time sync to Master |
| 57 downto 32 | clk counter 26 bits (max 67M-clock, or 670MHz if 100ms bch time-out) |
| 31 downto 4 | 8ns timetag, 28 bits |
| 3 | not used, set to ‘0’ |
| 2 downto 0 | TDC 1ns precision 3 bits |
## Bunch tailer, 160 bits (20 bytes)¶
Added to each bunch of 24 events (up to 25, if no time-out) before sending or sent after time-out for slow control if there is no trigger, so corresponds to the counters for the last event. (Modif 20190606, format v0.6: Except for the seconds, where this value is for the last read-out event, since this is the only common counter not duplicated for read-out / busy). Note: The tailer contains the full counter, not just the MSB. So if only the tailer is read, the final read-out and busy event counter can be known, without needing to decode the individual events.
| Bits | Content |
|---|---|
| 159 downto 128 | bunch counter 32 bits |
| 127 downto 96 | event counter 32 bits |
| 95 downto 64 | busy counter 32 bits |
| 63 downto 48 | PPS counter 16 bits (16 bits total → 18h of PPS) |
| 47 downto 16 | seconds 32 bits (32 bits total → 136 yrs of seconds) |
| 15 | flag for tm_time_valid (SPLL lock & local time synch to Master) |
| 14 | flag for rst_cnt_ack (counters and TDC reset) |
| 13 downto 8 | not used, set to ‘0’ |
| 7 downto 0 | version # (xxxx.yyyy for major.minor version) |
## Description of the different fields¶
For the events within the bunch:
• “Seconds” is the 16 least significant bits (LSBs) of the WR time stamp (so it should be that part of TAI, for a WR-Switch set up so that it is running an NTP client). If NTP is not running on the WR-Switch, it will be the LSB part of the seconds since the switch booted up (we recommend to set up NTP on your switch… see REFERENCE??).
• “PPS counter” is the count of the PPSs since the reset/GetReady of the TiCkS.
• “8ns tag”, is the time-stamp to within 8ns since the last PPS
• “1ns tag”, is the 1ns part of the time-stamp within the 8ns above.
• “Time valid”, is the value of that flag in the WR-node/TiCkS at the time of the time-stamp.
• “SPI” should be the pattern sent from the TIB to the TiCkS over the SPI link (so, telescope triggered pattern and event type) (to be tested in detail). Modif 20190606, format v0.6: If the TiCkS is set in mode “no-SPI”, then “0x0000” is put. If the TiCkS is in mode SPI, but the SPI reception times out, then “0xAAAA” is put.
For the tailer (besides the MSBs of some fields above):
• “Time valid” is the same as described above (it is repeated in the tailer, in case no triggers are coming in so that this can still be monitored).
• “Flag for counters & TDC enable reset”, is set if the counters and TDC are enabled, zero otherwise
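As an illustration only (not the CDTS-server of the next section), a minimal Python sketch of unpacking one bunch according to the tables above; the big-endian byte order on the wire is an assumption.

```python
# Unpack a TiCkS bunch: N x 12-byte events followed by a 20-byte tailer.
import socket

EVENT_LEN, TAILER_LEN = 12, 20

def bits(word, hi, lo):
    """Extract bits hi..lo (inclusive), numbered as in the tables."""
    return (word >> lo) & ((1 << (hi - lo + 1)) - 1)

def parse_event(raw12):
    w = int.from_bytes(raw12, "big")
    return {
        "spi": bits(w, 95, 80), "readout_lsb": bits(w, 79, 72), "busy_lsb": bits(w, 71, 64),
        "pps_lsb": bits(w, 63, 62), "sec_lsb": bits(w, 61, 60),
        "busy_flag": bits(w, 59, 59), "time_valid": bits(w, 58, 58),
        "clk": bits(w, 57, 32), "tag_8ns": bits(w, 31, 4), "tdc_1ns": bits(w, 2, 0),
    }

def parse_tailer(raw20):
    w = int.from_bytes(raw20, "big")
    return {
        "bunch": bits(w, 159, 128), "events": bits(w, 127, 96), "busy": bits(w, 95, 64),
        "pps": bits(w, 63, 48), "seconds": bits(w, 47, 16),
        "time_valid": bits(w, 15, 15), "running": bits(w, 14, 14), "version": bits(w, 7, 0),
    }

def parse_bunch(payload):
    events = [parse_event(payload[i:i + EVENT_LEN])
              for i in range(0, len(payload) - TAILER_LEN, EVENT_LEN)]
    return events, parse_tailer(payload[-TAILER_LEN:])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 55000))                 # default TiCkS Tx data port
events, tailer = parse_bunch(sock.recvfrom(2048)[0])
print(len(events), "events; tailer:", tailer)
```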
# Receiving software¶
A prototype version of the CDTS-Server or Bridge software – which receives the UDP packets from the TiCkS-UCTS and sends them on via TCP to another address/port – is given on the CTA SVN server [3] and on git: https://gitlab.in2p3.fr/mdpunch/ticks. Further details are given in Appendix A. This software has been extensively tested. Alternatively, Wireshark or tcpdump can be used to capture the raw packets.
# SNMP¶
SNMP is now implemented in WRCore since v4, so it is possible to get status information from WR core itself (see WR user’s manual for details). SNMP can also get information from the TiCkS, with the information already provided in the tailer (counter values, PLL lock, reset status) as well as the destination IP and MAC address, and other values (To Be Defined).
This will allow the monitoring of the TiCkS from other points besides the PC which receives the data. The functionality is implemented, but parameters to be monitored and the monitoring software have to be defined.
In order to test SNMP functionality, follow the procedure in the wrpc-user-manual-v4.2.pdf. Set the SNMP_OPT environment variable to use the appropriate MIB file, then execute the "snmpwalk" command with it, e.g.:
SNMP_OPT="-c public -v 2c -m WR-WRPC-AUX-DIAG-MIB -M +/home/cta/Documents/UDP_test/wr-coresv4.2/snmp 10.10.3.108"
snmpwalk $SNMP_OPT wrpcCore
If the card at the given address responds, it will give a list of values, for example the TemperatureValue. If the address does not respond, then either the card is not present at that address, or it is not operational.
CC needs to modify the values available by SNMP, and also the MIB file for interpreting them. Besides the standard WR SNMP monitoring variables which can be found at TkXXXX ???, the specific variables which can be monitored for the TiCkS are as follows:
• IP destination address
• MAC destination address
• port destination
• readout event counter
• busy event counter
• firmware version (in next firmware version, after firmware commit 73b562a Tk: to be updated with version #)
• throttler value (see above for default)
• throttler implemented (at synthesis)
• WR time valid
• SPI enable
# Future developments for Configuration & Operation¶
Some options for future developments are being explored, but we consider that the essential functionality of the TiCkS responds to the needs.
# Time-scales considerations¶
White Rabbit is an extension of PTP, so it uses the same conventions as PTP. PTP typically uses the same epoch as Unix time (start of 1 January 1970). While Unix time is based on Coordinated Universal Time (UTC) and is subject to leap seconds, PTP is based on International Atomic Time (TAI). The PTP grandmaster communicates the current offset between UTC and TAI, so that UTC can be computed from the received PTP time. TAI and GPS time are strictly monotonic, with no gaps or overlaps due to leap seconds as in UTC. TAI is currently (in January 2020) ahead of UTC by 37 seconds. The zero of GPS time is defined as 0h on 6-Jan-1980. TAI is always ahead of GPS by 19 seconds.
For PC applications, the recommendation is to use a Linux kernel clock which is monotonic and/or based on TAI. Any recent NTP daemon can set the kernel's TAI_OFFSET variable. Then clock_gettime(CLOCK_TAI, struct timespec *tp); (supported by Linux kernels > 3.10) or equivalent can be used to get the TAI time.
As in the SWAT manual, the instructions for NTP are below. NTP's configuration file (usually /etc/ntp.conf) must include the following directive:
leapfile /usr/local/etc/leap-seconds.list
Add a cronjob to periodically (once per day) update the leapfile (https://hpiers.obspm.fr should be reachable). The cronjob could be:
#!/bin/bash
#
# this is : /usr/local/sbin/ntp-get-leapseconds-file.sh
#
# copy official TAI-UTC leap seconds table
# remember to add to /etc/ntp.conf line:
#
# leapfile /usr/local/etc/leap-seconds.list
#
# crontab job :
# 3 14 * * * /usr/local/sbin/ntp-get-leapseconds-file.sh > /dev/null 2>&1
wget -O /tmp/leap-seconds.list.tmp https://hpiers.obspm.fr/iers/bul/bulc/ntp/leap-seconds.list
if [ 0 = $? ]; then
echo "OK"
/bin/mv /tmp/leap-seconds.list.tmp /usr/local/etc/leap-seconds.list
/bin/chmod 644 /usr/local/etc/leap-seconds.list
chown root:root /usr/local/etc/leap-seconds.list
fi
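As a quick check from the DAQ PC that this setup took effect, the kernel TAI offset can be read from Python (3.9 or later exposes CLOCK_TAI on Linux); if the difference below is 0, the NTP daemon has not set the offset.

```python
# TAI minus UTC as seen by the kernel; should currently be 37 s when the leap file is loaded.
import time

tai = time.clock_gettime(time.CLOCK_TAI)
utc = time.clock_gettime(time.CLOCK_REALTIME)
print("TAI - UTC =", round(tai - utc), "s")
```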
# Firmware Versions¶
A table of the latest distributed firmware versions is given here: https://forge.in2p3.fr/projects/ctaactl/wiki/UCTS-TiCkS
After mid-2020, the new versions will have a firmware version number which will be accessible by SNMP, which will be added to this table.
All firmware versions are kept under version control, at: https://gitlab.in2p3.fr/cedric-champion/TiCkS_wr-coresv4/
# Updating of the Firmware (if needed)¶
Normally, the operation to update the firmware in the PROM flash memory should not be necessary, but if it is, the tools required are:
• Either:
• iMPACT software from Xilinx, or
• xc3sprog software (see below)
• a USB-blaster (USB to Jtag).
The versions of the TiCkS board distributed after spring 2020 use the Macronix MX25L3233FMI-08G flash memory which is pin-to-pin compatible with the obsolete M25P32 memory (as used on the SPEC board), see https://www.ohwr.org/projects/conv-ttl-rs485-hw/wiki/obsolete-components.
That website gives all the instructions for programming the board, either with the Xilinx iMPACT software or the much more convenient xc3sprog (which is planned to be used on the remote reprogramming board).
A branch of the xc3sprog software with a minor change to take into account the new flash can be found at: https://github.com/mdpunch/xc3sprog.
We note just a bug with the Platform Cable USB II from Xilinx. It is sometimes necessary to unplug/replug the Platform Cable USB II to reprogram the board, after a first test or a scan. (But, it works fine with the FDTI USB/JTAG chip which will be used on the remote reprogramming board)
When using Xilinx iMPACT, select the equivalent density Micron N25Q device instead of the desired Micron MT25Q device or Macronix device, see the Macronix Application Note AN-0245: http://www.macronix.com/Lists/ApplicationNote/Attachments/1903/AN0245V2%20-%20Using%20Macronix%20Serial%20Flash%20with%20Xilinx%20iMPACT%20Tools.pdf
To avoid the error message, set the following operating system environment variable
export XIL_IMPACT_SKIPIDCODECHECK=1
which instructs the iMPACT tool to bypass the ID check, allowing the programming operation to proceed.
Then launch Xilinx iMPACT tool.
## Obsolete
If you have an “mcs” file (“microcontroller series”), this can be flashed directly to the PROM in the standard Xilinx procedure, and specifying the PROM memory type (m25p32).
You may be starting from a bitstream "bit" file (a file which can be loaded directly into the FPGA and will run, but is lost when you power down). In this case you first have to create the "mcs" file. The procedure uses the PROM File Formatter wizard in Xilinx iMPACT, and is also described with more relevant SPEC/TiCkS details in the White Rabbit core v3 user's manual (below). Note that you need to add a "non configuration file" from the White Rabbit Core Collection to set up the filesystem on the PROM (using version sdbfs-standalone-160812.bin, not version sdbfs-flash.bin, on the OHW site).
# Tips and Tricks
Normally, the IP address for a given “telescope” should be fixed, which can be done by setting the bootp table such that the MAC address of the TiCkS card corresponds to the IP address desired for that telescope. Otherwise, bootp could be left to distribute available addresses, and the relation could be confusing.
• How to check what cards are available at what addresses / ports?
• sudo tcpdump -i enp4s0
• Gives a list of packets on the relevant interface; UDP packets of length 308 are probably bunches from TiCkS cards
• sudo nmap -sP 10.10.3.250/24
• Here 10.10.3.250 is the address of the network card attached to the WR switch-over
• Gives MAC addresses and IPs for connected devices/cards
• How to look at the data from a particular card:
• sudo tcpdump -X -n -e -i enp4s0 host 10.10.3.107
• Gives data in various formats.
• How to see the terminal on the card (if USB is connected):
• sudo minicom -D /dev/ttyUSB0
• Then type “gui” (Esc to exit gui, Ctrl-A Q to quit minicom)
• Use whichever USB device corresponds to the card (which one is undefined a priori); minicom then displays the card's IP address.
• How to see what’s on the etherbone registers map
• eb-ls udp/10.10.3.99
For example check IP address (obtained from DHCP server)
# Document History
| Version | Date | Modification History | Pages/Chapter |
| --- | --- | --- | --- |
| 0 | 2018-05-09 | Creation | |
| 1.0 | 2018-07-01 | First draft | |
| 1.1 | 2019-03-26 | Updated some points, notably for command 0x"6" to change the port | |
| 1.2 | 2020-02-11 | Added a section on "Operational states" | |
| 1.3 | 2020-02-21 | Added a section on "Time-scales considerations" | |
| 1.4 | 2020-05-07 | Added an appendix on "The Question of Calibration"; modified section on "Firmware update" | Appendix C; section "Firmware update" |
|
# How are signal bandwidth and MSPS related?
In various software defined radios, there are three important parameters to set when receiving: frequency, bandwidth and MSPS (million samples per second?). What does it mean to receive with a triple of parameters $(freq, b, s)$ (freq-frequency, b-bandwidth, s-MSPS)? Radio listens to frequencies in $[f-b/2,f+b/2]$ range, divides this range into $s$ chunks and transmits the sampled data to the PC?
How to calculate $s$ for a given $b$, to avoid aliasing?
Am I right that MSPS has nothing to do with the Nyquist theorem?
The need for modulation:
Voice signals generally lie in the frequency range 1 kHz to 4 kHz and music lies in the range 20 Hz to 20 kHz. Say your locality has 4-5 AM broadcast stations. All of the broadcast stations cannot transmit their content as is, because if all the stations transmit at the same time there will be signal interference. Also, for efficient radiation of electromagnetic energy the radiating antenna should be of the order of the wavelength of the radiated signal. If the signal is 20 Hz-20 kHz, the length of the antenna has to be of the order of kilometers. So, they place the information carried by 20 Hz-20 kHz at higher frequencies. Every station places its content in its own band of frequencies. For example, one station will operate at 100 MHz with its band of operation being 100 MHz - 20 Hz to 100 MHz + 20 kHz, the second one at 105 MHz with its band of operation being 105 MHz - 20 Hz to 105 MHz + 20 kHz, and so on. The process of placing the frequency content from one spectrum onto another is called modulation. This information is then radiated into free space by an antenna at the station.
The receiver has a tuning knob, which lets you select a band of frequencies. Say you have tuned your receiver to receive station 3, which is operating at 110 MHz. You will select all the frequencies in the range 110 MHz - 20 Hz to 110 MHz + 20 kHz. The signal present in the spectrum 110 MHz - 20 Hz to 110 MHz + 20 kHz is brought back to 20 Hz to 20 kHz using a process which reverses the operation on the encoder side. This is called demodulation. This signal is ready to hear.
For a simple modulation-demodulation read this
Finally, if you want to post-process the signal before listening like reducing the noise, increasing bass/treble, you have to filter the signal.
In a software defined radio, there will be an RF receiver which does the job of the receiver mentioned above. The message signal that was recovered is then converted to digital samples using an analog-to-digital converter. The analog-to-digital converter has 2 steps: 1. sampling at a rate above or equal to Nyquist, and 2. quantization. These digital samples are processed using software on a PC/embedded system. You can apply filters on these samples. These digital/discrete samples are then converted back to analog using a D/A converter. The main step of a D/A converter is interpolation.
Theoretically, in an ideal system, it is sufficient to sample the signal at the Nyquist rate. Before playback, an ideal low-pass filter will recover the message/baseband signal without any information loss. But such a design is not feasible.
At a low sampling rate, the interpolation filter (low-pass filter) will not be able to interpolate intermediate samples effectively. However, if the sampling rate is increased you can have a better reconstructed message signal. Hence at a higher MSPS you should receive a better quality signal than at a low MSPS. However, the improvement you perceive will become negligible after a certain point, as your ear might not be able to notice the difference.
I believe lowering the MSPS below Nyquist can introduce cross talk. That doesn't mean that you will receive neighboring radio stations, because they got clipped at the RF receiver stage. The cross talk you receive is because of frequency warping.
In the above description I am speculating that sampling happens on the demodulated signal, not on the modulated signal, because demodulation can be easily achieved via an envelope detector for AM or a phase-locked loop for FM. Demodulation in the digital domain can be costly, as sampling a high-frequency signal at the Nyquist rate and operating at such high bandwidths can be resource consuming.
Also no FM station operates at Ghz. Giga is for satellite. 100 Mhz for FM. and Khz for AM.
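As a small numerical illustration of the Nyquist point above (an added sketch, not part of the original answer; the sample rate and tone frequency are arbitrary), a real tone above half the sample rate folds back to a lower apparent frequency after sampling:
```python
import numpy as np

fs = 10_000.0      # sample rate in Hz
f_tone = 7_000.0   # tone above fs/2 = 5 kHz, so it will alias
n = np.arange(2048)

x = np.sin(2 * np.pi * f_tone * n / fs)    # sample the tone
spectrum = np.abs(np.fft.rfft(x))          # spectrum of the sampled signal
freqs = np.fft.rfftfreq(n.size, d=1.0 / fs)
f_apparent = freqs[np.argmax(spectrum)]

print(f"True tone: {f_tone:.0f} Hz, apparent tone after sampling: {f_apparent:.0f} Hz")
# The apparent tone is near |fs - f_tone| = 3000 Hz: the alias predicted by the Nyquist criterion.
```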
• How is it possible to capture 1GHz signal with 3.2MSPS? That's the max. sample rate of RTL SDR. – user5631 Oct 10 '13 at 8:18
• The 1Ghz is the carrier signal. Around 1Ghz, in few Khz you will have the message signal. The RF antenna when tunes in to the station will demodulate the message signal from the carrier signal. I.e., the message is recovered from carrier. The process of A to D conversion happens on the message signal which is of the order Khz for voice/music stations. – Ram Oct 10 '13 at 10:40
• @user5631 First of all, the link you posted is not working. The radios capture modulated signals through the antenna and as ram said will be demodulated before sampling can be applied. sampling is always applied in the received side after the demodulation when there is only message signal left. The USRP N210 series has a sampling rate of 100MSPS.So theoretically that means it can sample about 50 MHz of bandwidth. But then your computer processor has to be really fast to sample at that rate which is unlikely.So practically we achieve about 10 Mhz bandwidth signals using SDR, – Karan Talasila Oct 10 '13 at 13:51
• No system can sample a 1GHZ signal. It is impossible to design ADC's to sample at such high rate. When they talk about radios capturing modulated signals, it means the daughter board and antenna will receive a modulated electromagnetic signal at those frequency ranges. The sample rate is applied only after you get the message signal. You have to first understand the difference between bandwidth and carrier frequency. That will help you understand your confusion. – Karan Talasila Oct 10 '13 at 13:56
• I checked the link. It is clearly given that radio frequency range for example USRP B 210 is 50Mhz-6GHz which means that the radio can operate on modulated signals in range of 50Mhz-6GHz and not message signals of bandwidth 50Mhz-6GHZ, That's the difference you need to understand. Try doing one thing. on the USRP B210 try to send a signal in maybe 3khz. The flowgraph will not work and it will say error because transmission is not in frequency range. That is because you have to modulate the signal into the frequency band of operation and send it for the radio to work. – Karan Talasila Oct 10 '13 at 14:30
|
# Tot and colimits
This must be a well-known exercise with spectral sequences, but I don't know a reference for it. I'm trying to figure out when does $Tot$ commute with colimits.
More precisely, let $X$ be a double cochain complex of, say, $R$-modules, $R$ a commutative ring with unit, or, more generally, a double complex in an abelian category. Let $\cal{C}$ denote the category of these double cochain complexes.
We have two different total functors, $\mbox{Tot}^\prod$ and $\mbox{Tot}^{\bigoplus}$, from the category of double complexes to the category of cochain complexes:
$$\mbox{Tot}^{\prod}(X)^n = \prod_{p+q=n}X^{p,q} \qquad \mbox{and} \qquad \mbox{Tot}^{\bigoplus}(X)^n = \bigoplus_{p+q=n}X^{p,q} \quad .$$
Let $\mbox{Tot}$ denote anyone of them and let $I$ be a (filtered) category, and $X: I \longrightarrow \cal{C}$ a functor. We have a natural morphism
$$\theta: \varinjlim_i \mbox{Tot} (X_i) \longrightarrow \mbox{Tot} (\varinjlim_i X_i) \quad .$$
When dealing with $\mbox{Tot}^\bigoplus$, this $\theta$ is an isomorphism, because a direct sum is a colimit and colimits commute with colimits.
What happens when we take $\mbox{Tot}^\prod$? Is $\theta$ at least a quasi-isomorphism (a morphism inducing an isomorphism in cohomology)? In which cases? Do we need some extra hypothesis on the abelian category (AB...)? Is the hypothesis "filtered" really needed, or we can deal with arbitrary colimits in general?
Of course, if our double complex has finite diagonals, then $\mbox{Tot}^\prod = \mbox{Tot}^\bigoplus$, and we are done. But what happens without this hypothesis?
I'm mainly interested in the case of a right half-plane double complex, that is $X^{p,q} = 0$ if $p<0$, but I'll be glad to learn about all possible cases.
Any references or hints will be welcome.
-
Imagine that all double complexes in the image of your functor X: I → C have both differentials equal to zero. Moreover, all terms of these bicomplexes outside of a fixed diagonal are also zero. Then you are asking, quite simply, whether colimits commute with countable products. If they don't, your morphism θ cannot be a quasi-isomorphism (being a non-isomorphism of complexes with zero differentials). And of course, if in a certain abelian category countable filtered colimits commute with countable products (and both exist), then all objects of this category are zero.
-
Sorry, but why are necessarily all the objects zero? – Agusti Roig Jun 16 '10 at 0:56
Consider the set of diagrams D_n = (0->0->...->A->A->...) -- 0 on the first n positions and A on the subsequent ones, where A is a certain fixed object in our abelian category, and the maps between copies of A are the identity maps. If the countable filtered colimit commutes with the countable product for this set of diagrams, it simply means that the natural map from the coproduct of a countable set of copies of A to their product is an isomorphism. – Leonid Positselski Jun 16 '10 at 8:32
It is a standard fact (mentioned in Grothendieck's Tohoku paper) that in this case the object A must be zero. Basically, the isomorphism of the finite coproduct and the finite product in an additive category allows to add morphisms, and an isomorphism between the countable coproduct and product would allow to take countable sums of morphisms. In particular, there is a well-defined countable sum of copies of the identity endomorphism of A. This can be only non-contradictory when the identity endomorphism is zero. – Leonid Positselski Jun 16 '10 at 8:36
Ok, thank you very much! – Agusti Roig Jun 16 '10 at 8:40
It seems to me that one should instead consider the diagrams $D_n=(A\to A\to \dots \to A\to 0\to 0\to \dots)$, defined like yours but with the roles of $A$ and $0$ exchanged. If the countable filtered colimit commutes with the countable product for this set of diagrams, you get that $0= \prod_n 0 = \prod_n colim_m D_{n, m} \cong colim_m \prod_n D_{n,m}= colim_m \prod_{i\leq m} A = \prod_n A$
-
|
# Difference between revisions of "SMHS AssociationTests"
## Scientific Methods for Health Sciences - Association Tests
### Overview
Measuring the association between two quantities is one of the tools researchers most commonly need in studies. The term association refers to a possible correlation in which two or more variables vary together according to some pattern. There are many statistical measures of association, including the relative ratio, the odds ratio and the absolute risk reduction. In this section, we are going to introduce measures of association in different studies.
### Motivation
In many cases, we need to measure whether two quantities are associated with each other -- that is, whether two or more variables vary together according to some pattern. There are many statistical tools we can apply to measure the association between variables. How can we decide what types of measures we need to use? How do we interpret the test results? What do the test results imply about the association between the variables we studied?
### Theory
• Measures of Association: (1) relative measures $Relative\, risk=\frac{Cumulative\, incidence\, in\, exposed}{Cumulative\,incidence\, in \,unexposed}=ratio\, of\, risks =Risk\, Ratio$;
$Rate \,Ratio=\frac{Incidence\, rate\, in\, exposed} {Incidence\, rate\, in\, unexposed}$;
(2) difference: $Efficacy=\frac{Cumulative\, incidence\, in\, placebo\, -\, Cumulative\, incidence\, in\, the\, treatment}{Cumulative\, incidence\, rate\, in\, placebo\, group}$.
We are going to interpret the measurement results and conclude about the association between variables through examples in different types of trials.
• Chi-square test: a non-parametric test of the statistical significance of the association between two categorical variables. It tests whether the measured factor is associated with membership in one of two samples. For example, the chi-square test tests whether there is statistical evidence that the measured factor is not randomly distributed in the cases compared to the controls in a case-control study. The test statistic is $\chi_{o}^{2}=\sum_{i=1}^{n}\frac{(O_i-E_i)^2}{E_i} \sim \chi_{df}^{2}$, where $E_i$ is the expected frequency under the null hypothesis, $O_i$ is the observed frequency, $n$ is the number of cells in the table, $df=(\#rows-1)(\#columns-1)$, and $E=\frac{row\, total \times column\, total}{grand\, total}$. The null hypothesis is that there is no association between the exposure group and the disease studied.
• Conditions for validity of the χ^2test are:
• Design conditions
• for a goodness of fit, it must be reasonable to regard the data as a random sample of categorical observations from a large population.
• for a contingency table, it must be appropriate to view the data in one of the following ways: as two or more independent random samples, observed with respect to a categorical variable; as one random sample, observed with respect to two categorical variables.
• for either type of test, the observations within a sample must be independent of one another.
• Sample conditions: critical values only work if each expected value > 5
• Example of association: a study on the association of a particular gene and the risk of late onset disease. The data is summarized in the data table below:
|
# Python tests should be named test_*.py for pytest support
#### Details
• Type: RFC
• Status: Implemented
• Resolution: Done
• Component/s:
• Labels:
None
#### Description
Now that we’re beginning to use pytest to run unit tests, it makes sense to use pytest’s default test discovery so that simply running py.test (without arguments) will have pytest discover and run all tests in the tests/ directory. Currently we must type py.test tests/*.py.
We can achieve this simplicity through a coding standard that specifies all Python unit test modules must be named with the test_ pattern:
tests/test_example.py
(Our current standard is tests/testExample.py. The Python Style Guide implies this style, and the Python Unit Testing guide assumes this naming style.)
I think this is a straightforward coding style change that will make life easier in the long run.
Alternative
An alternative to changing the names of test modules, while still making py.test work without arguments is to add a configuration file to each repo that tells pytest how to find our test modules.
I don’t recommend this. I think it’s better just to use a correct and consistent naming schema.
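For reference, a hedged sketch of what such a per-repository configuration could look like, using pytest's python_files and testpaths ini options (the patterns below are illustrative, not a proposal):
```ini
# pytest.ini (illustrative only)
[pytest]
# Collect both the current testExample.py style and the proposed test_example.py style.
python_files = test*.py
# Restrict collection to the tests/ directory so py.test can be run with no arguments.
testpaths = tests
```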
#### Activity
John Parejko added a comment -
+1. Should this also trickle down to the test method names as well?
Also, this implies that things that aren't tests should not begin with test.
Tim Jenness added a comment -
I think the test method names are already using the right form for test discovery (via unittest convention) so I dont' think this RFC needs to go so far as changing the content of every single file.
I'm fine with a rename of the files, given that everyone already went and renamed a lot of them during the hack week. I think we are pretty safe assuming that all the files can be renamed as .py already works fine. I moved support code into sub directories in many cases to allow .py to work without bringing in support code but there might be a few left over.
Jonathan Sick added a comment - - edited
Should this also trickle down to the test method names as well?
In my experience, py.test will run the usual methods of a test class that inherits from unittest.TestCase. No need for change there (for pytest compatiblity).
If/when we switch from unittest to native pytest functions, then yes, we’d want to use a test_ prefix for those functions.
Also, this implies that things that aren't tests should not begin with test.
I think pytest would just find the module, but not see any unittest.TestCase subclasses, so there should be no direct harm. I haven’t tested or thought deeply about this. But in principle, yes, non-test modules in a tests/ directory shouldn’t look like tests to humans either.
Tim Jenness added a comment -
Jonathan Sick is correct. pytest will read the file, find no tests and ignore it.
Pim Schellart [X] (Inactive) added a comment -
It would greatly simplify my life if this change is not made for anything below (and including) afw until the pybind11 port is complete.
Tim Jenness added a comment -
Jonathan Sick Assuming the agreement is for a file rename with no change in content, I am happy for this RFC to be adopted so long as the implementation ticket can be correctly blocked by Pim Schellart [X]'s pybind11 work. Pim Schellart [X] I assume it is okay to rename test files that are in packages that have no C++ code in them?
Pim Schellart [X] (Inactive) added a comment -
Tim Jenness, that is probably safe. But not completely, it may still be that these packages required some minor changes to the tests if they depended on changed lower level constructs.
Jonathan Sick added a comment - - edited
Pim Schellart [X], is there an epic I can properly block this RFC's possible implementation on?
Pim Schellart [X] (Inactive) added a comment -
I suppose you can block it on DM-8467.
Jonathan Sick added a comment -
I'm marking this as adopted as there are no concerns about the long-term value of this decision.
There are two implementation options for this sort of change:
1. Encourage developers to upgrade their packages as they go, or
2. Combine this implementation with other codebase changes (RFC-107 and numpydoc).
I think option 1 will work here since the change is so lightweight (a single git mv commit per repo). A good way to coordinate this will be with a wiki signup page, like we did for Python 3 and pytest.
Finally, I confirm that implementation is blocked on pybind11 work (see linked ticket). The implementor should also confirm with Pim Schellart [X] that the codebase is ready before going ahead.
Tim Jenness added a comment -
Who is going to coordinate the work of renaming if we are going to adopt the python3 approach? Have you talked to the relevant T/CAMs about ensuring that their developers are aware of this change?
Jonathan Sick added a comment -
This Confluence page is tracking the migration: https://confluence.lsstcorp.org/pages/viewpage.action?pageId=58950873
Tim Jenness added a comment -
Jonathan Sick the work triggered by this RFC has been completed. Does this mean the RFC can be marked implemented? Are you waiting for the work on the confluence page to be completed (which I don't think is necessary, since this is a policy RFC).
Jonathan Sick added a comment -
That's true, I'll update the status to implemented.
#### People
Assignee:
Jonathan Sick
Reporter:
Jonathan Sick
Watchers:
John Parejko, Jonathan Sick, Pim Schellart [X] (Inactive), Tim Jenness
|
# Pearl Index
The Pearl Index, also called the Pearl rate, is the most common technique used in clinical trials for reporting the effectiveness of a birth control method.
## Calculation and usage
$\mbox{Pearl-Index} = \frac{\mbox{Number of Pregnancies} \cdot 12} {\mbox{Number of Women} \cdot \mbox{Number of Months}} \cdot 100$
Three kinds of information are needed to calculate a Pearl Index for a particular study:
• The total number of months or cycles of exposure by women in the study.
• The number of pregnancies.
• The reason for leaving the study (pregnancy or other reason).
There are two calculation methods for determining the Pearl Index:
In the first method, the number of pregnancies in the study is divided by the number of months of exposure, and then multiplied by 1200.
In the second method, the number of pregnancies in the study is divided by the number of menstrual cycles experienced by women in the study, and then multiplied by 1300. 1300 instead of 1200 is used on the basis that the length of the average menstrual cycle is 28 days, or 13 cycles per year.
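As an illustration of the two methods (an added sketch, not part of the original article; the function and argument names are invented):
```python
def pearl_index_months(pregnancies, woman_months):
    """Method 1: pregnancies per 100 woman-years, exposure counted in months."""
    return pregnancies / woman_months * 1200

def pearl_index_cycles(pregnancies, cycles):
    """Method 2: exposure counted in 28-day menstrual cycles (13 per year)."""
    return pregnancies / cycles * 1300

# e.g. 4 pregnancies over 1200 woman-months (or the equivalent 1300 cycles) of use:
print(pearl_index_months(4, 1200))  # 4.0 unintended pregnancies per 100 woman-years
print(pearl_index_cycles(4, 1300))  # 4.0
```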
The Pearl Index is sometimes used as a statistical estimation of the number of unintended pregnancies in 100 woman-years of exposure (e.g. 100 women over one year of use, or 10 women over 10 years). It is also sometimes used to compare birth control methods, a lower Pearl index representing a lower chance of getting unintentionally pregnant.
Usually two Pearl Indexes are published from studies of birth control methods:
• Actual use Pearl Index, which includes all pregnancies in a study and all months (or cycles) of exposure.
• Perfect use or Method Pearl Index, which includes only pregnancies that resulted from correct and consistent use of the method, and only includes months or cycles in which the method was correctly and consistently used.
## History
The index was introduced by Raymond Pearl in 1933.[1] It has remained popular for over eighty years, in large part because of the simplicity of the calculation.
## Criticisms
Like all measures of birth control effectiveness, the Pearl Index is a calculation based on the observations of a given sample population. Thus studies of different populations using the same contraceptive will yield different values for the index. The culture and demographics of the population being studied, and the instruction technique used to teach the method, have significant effects on its failure rate.[2][3]
The Pearl Index has unique shortcomings, however. It assumes a constant failure rate over time. That is an incorrect assumption for two reasons: first, the most fertile couples will get pregnant first. Couples remaining later in the study are, on average, of lower fertility. Second, most birth control methods have better effectiveness in more experienced users. The longer a couple is in the study, the better they are at using the method. So the longer the study length, the lower the Pearl Index will be - and comparisons of Pearl Indexes from studies of different lengths cannot be accurate.
The Pearl Index also provides no information on factors other than accidental pregnancy which may influence effectiveness calculations, such as:
• Dissatisfaction with the method
• Trying to achieve pregnancy
• Medical side effects
• Being lost to follow-up
A common misperception is that the highest possible Pearl Index is 100 - i.e. 100% of women in the study conceive in the first year. However, if all the women in the study conceived in the first month, the study would yield a Pearl Index of 1200 or 1300. The Pearl Index is only accurate as a statistical estimation of per-year risk of pregnancy if the pregnancy rate in the study was very low.
In 1966, two birth control statisticians advocated abandonment of the Pearl Index:[4][5]
[The Pearl Index] does not serve as an estimator of any quantity of interest, and comparisons between groups may be impossible to interpret... The superiority of life table methods or other estimators that do not assume a constant hazard rate seems clear.[6]
## Footnotes
1. ^ Pearl, Raymond (1933). "Factors in human fertility and their statistical evaluation". Lancet 222 (5741): 607–611. doi:10.1016/S0140-6736(01)18648-4.
2. ^ Trussell J, Hatcher RA, Cates W et al. (1990). "A guide to interpreting contraceptive efficacy studies". Obstetrics and Gynecology 76 (3 Pt 2): 558–567. PMID 2199875.
3. ^ Trussell J (1991). "Methodological pitfalls in the analysis of contraceptive failure". Statistics in medicine 10 (2): 201–220. doi:10.1002/sim.4780100206. PMID 2052800.
4. ^ Sheps MC (1966). "Characteristics of a ratio used to estimate failure rates: occurrences per person year of exposure". Biometrics (Biometrics, Vol. 22, No. 2) 22 (2): 310–321. doi:10.2307/2528521. JSTOR 2528521. PMID 5961447.
5. ^ Potter RG (1966). "Application of life table techniques to measurement of contraceptive effectiveness". Demography (Demography, Vol. 3, No. 2) 3 (2): 297–304. doi:10.2307/2060159. JSTOR 2060159. PMID 21318704.
6. ^ Kippley, John; Sheila Kippley (1996). The Art of Natural Family Planning (4th addition ed.). Cincinnati, OH: The Couple to Couple League. pp. 140–141. ISBN 0-926412-13-2., which cites:
|
## Precalculus: Mathematics for Calculus, 7th Edition
Fill the blanks with y,$\qquad$x,$\qquad$and$\displaystyle \qquad \frac{y}{x}$
See p. 409, Definition of the Trigonometric Functions Let $P(x, y)$ be the terminal point on the unit circle determined by the real number $t$. Then for nonzero values of the denominator the trigonometric functions are defined as follows. $\sin t=y \qquad \cos t=x\qquad \displaystyle \tan t=\frac{y}{x}$ $\displaystyle \csc t=\frac{1}{y}\qquad \displaystyle \sec t=\frac{1}{x}\qquad \displaystyle \cot t=\frac{x}{y}$ ------------------ Fill the blanks with y,$\qquad$x,$\qquad$and$\displaystyle \qquad \frac{y}{x}$
|
Overview - Maple Help
Home : Support : Online Help : Mathematics : Finance : Date Arithmetic : Finance/DayCounters
Finance Package Commands For Day Counting
Overview
The Financial Modeling package supports most standard day count conventions used in the industry, which include Actual/Actual, Actual/360 and 30/360 conventions.
Day counting convention defines the way in which interest accrues over time. Generally, we know the interest earned over some reference period, (for example, the time between coupon payments), and we are interested in calculating the interest earned over some other period.
The day counting convention is usually expressed as $X/Y$, where $X$ defines the way in which the number of days between the two dates is calculated, and $Y$ defines the way in which the total number of days in the reference period is measured. The interest earned between the two dates is
$\frac{\text{Number of days between the two dates}}{\text{Number of days in the reference period}} \times \text{Interest earned in the reference period}$
Three day counting conventions commonly used in the United States are
• Actual/Actual
• Actual/360
• 30/360
• DayCount - return the number of days between two dates according to a given convention
• YearFraction - return the interval between two dates as a fraction of a year according to a given convention
Day Count Conventions
Actual/Actual Conventions
The actual/actual interest accrual convention is recommended for euro-denominated bonds. There are at least three different interpretations of actual/actual. These three interpretations are identified as:
• Actual/Actual (ISDA)
• Actual/Actual (ISMA)
• Actual/Actual (AFB)
The difference between the ISDA, ISMA and AFB methods can be reduced to a consideration of the denominator to be used when calculating accrued interest. In all three cases, the numerator will be equal to the actual number of days from (and including) the last coupon payment date or period end date, to (but excluding) the current value date or period end date.
Under the Actual/Actual (ISDA) approach, the denominator varies depending on whether a portion of the relevant calculation period falls within a leap year. For the portion of the calculation period falling within a leap year, the denominator is 366, for the other portion the denominator is 365. The ISDA convention is also known as Actual/Actual (Historical), Actual/Actual, Act/Act, and according to ISDA also Actual/365, Act/365, and A/365.
Under the Actual/Actual (ISMA) approach, the denominator is the actual number of days in the coupon period multiplied by the number of coupon periods in the year. The ISMA and US Treasury convention is also known as Actual/Actual (Bond).
Under the Actual/Actual (AFB) approach, the denominator is either 365 if the calculation period does not contain February 29th, or 366 if the calculation period includes February 29th. The AFB convention is also known as actual/actual (Euro).
Consider some examples:
> $\mathrm{with}\left(\mathrm{Finance}\right):$
First you will use a day counter that follows the ISDA convention.
> $\mathrm{DayCount}\left("Jan-01-2006","July-01-2006",\mathrm{ISDA}\right)$
${181}$ (2.1.1)
The numerator is equal to the actual number of days from (and including) the last coupon payment date or period end date, to (but excluding) the current value date or period end date. Therefore, the number of days from January 1st, 2006 to July 1st, 2006 can be calculated by adding the number of days in January, February, March, April, May, and June together:
> $31+28+31+30+31+30$
${181}$ (2.1.2)
> $\mathrm{YearFraction}\left("Jan-01-2006","July-01-2006",\mathrm{ISDA}\right)$
${0.4958904110}$ (2.1.3)
The denominator for ISDA is 365 since the year of 2006 is not a leap year:
> $\frac{\mathrm{DayCount}\left("Jan-01-2006","July-01-2006",\mathrm{ISDA}\right)}{365}$
$\frac{{181}}{{365}}$ (2.1.4)
> $\mathrm{evalf}\left(\right)$
${0.4958904110}$ (2.1.5)
> $\mathrm{DayCount}\left("Jan-01-2008","April-20-2008",\mathrm{ISDA}\right)$
${110}$ (2.1.6)
> $\mathrm{YearFraction}\left("Jan-01-2008","April-20-2008",\mathrm{ISDA}\right)$
${0.3005464481}$ (2.1.7)
> $\frac{\mathrm{DayCount}\left("Jan-01-2008","April-20-2008",\mathrm{ISDA}\right)}{366}$
$\frac{{55}}{{183}}$ (2.1.8)
> $\mathrm{evalf}\left(\right)$
${0.3005464481}$ (2.1.9)
>
${256}$ (2.1.10)
>
${0.6994535519}$ (2.1.11)
>
$\frac{{128}}{{183}}$ (2.1.12)
> $\mathrm{evalf}\left(\right)$
${0.6994535519}$ (2.1.13)
In the second example you will use the ISMA convention.
> $\mathrm{DayCount}\left("Jan-01-2006","July-01-2006",\mathrm{ISMA}\right)$
${181}$ (2.1.14)
As you can see the number of days between January 1st, 2006 and July 1st, 2006 is the same according to both conventions. However, the length of the period from January 1st, 2006 to July 1st, 2006 as a fraction of the year is different.
> $\mathrm{YearFraction}\left("Jan-01-2006","July-01-2006",\mathrm{ISMA}\right)$
${0.5000000000}$ (2.1.15)
The denominator is the actual number of days in the coupon period multiplied by the number of coupon periods in the year.
> $\mathrm{DayCount}\left("Jan-01-2008","April-20-2008",\mathrm{ISMA}\right)$
${110}$ (2.1.16)
> $\mathrm{YearFraction}\left("Jan-01-2008","April-20-2008",\mathrm{ISMA}\right)$
${0.3333333333}$ (2.1.17)
> $\mathrm{DayCount}\left("Jan-01-2008","April-01-2008",\mathrm{ISMA}\right)$
${91}$ (2.1.18)
> $\mathrm{YearFraction}\left("Jan-01-2008","April-01-2008",\mathrm{ISMA}\right)$
${0.2500000000}$ (2.1.19)
Finally, consider the AFB day counting convention.
> $\mathrm{DayCount}\left("Jan-01-2006","July-01-2006",\mathrm{AFB}\right)$
${181}$ (2.1.20)
> $\mathrm{YearFraction}\left("Jan-01-2006","July-01-2006",\mathrm{AFB}\right)$
${0.4958904110}$ (2.1.21)
The denominator is either 365 if the calculation period does not include February 29th, or 366 if the calculation period includes February 29th.
> $\frac{\mathrm{DayCount}\left("Jan-01-2006","July-01-2006",\mathrm{AFB}\right)}{365}$
$\frac{{181}}{{365}}$ (2.1.22)
> $\mathrm{evalf}\left(\right)$
${0.4958904110}$ (2.1.23)
> $\mathrm{DayCount}\left("Jan-01-2008","April-20-2008",\mathrm{AFB}\right)$
${110}$ (2.1.24)
> $\mathrm{YearFraction}\left("Jan-01-2008","April-20-2008",\mathrm{AFB}\right)$
${0.3005464481}$ (2.1.25)
> $\frac{\mathrm{DayCount}\left("Jan-01-2008","April-20-2008",\mathrm{AFB}\right)}{366}$
$\frac{{55}}{{183}}$ (2.1.26)
> $\mathrm{evalf}\left(\right)$
${0.3005464481}$ (2.1.27)
>
${256}$ (2.1.28)
>
${0.7013698630}$ (2.1.29)
>
$\frac{{128}}{{183}}$ (2.1.30)
> $\mathrm{evalf}\left(\right)$
${0.6994535519}$ (2.1.31)
>
$\frac{{256}}{{365}}$ (2.1.32)
> $\mathrm{evalf}\left(\right)$
${0.7013698630}$ (2.1.33)
|
Orbital period
The orbital period is the time taken for a given object to make one complete orbit around another object.
When mentioned without further qualification in astronomy this refers to the sidereal period of an astronomical object, which is calculated with respect to the stars.[not verified in body]
There are several kinds of orbital periods for objects around the Sun, or other celestial objects.
Varieties of orbital periods
Orbital period is an approximated term, and can mean any of several periods, each of which is used in the fields of astronomy and astrophysics:[citation needed]
• The sidereal period is the temporal cycle that it takes an object to make a full orbit, relative to the stars. This is the orbital period in an inertial (non-rotating) frame of reference.
• The synodic period is the temporal interval that it takes for an object to reappear at the same point in relation to two or more other objects, e.g. when the Moon relative to the Sun as observed from Earth returns to the same illumination phase. The synodic period is the time that elapses between two successive conjunctions with the Sun–Earth line in the same linear order. The synodic period differs from the sidereal period due to the Earth's orbiting around the Sun.
• The draconitic period, or draconic period, is the time that elapses between two passages of the object through its ascending node, the point of its orbit where it crosses the ecliptic from the southern to the northern hemisphere. This period differs from the sidereal period because both the orbital plane of the object and the plane of the ecliptic precess with respect to the fixed stars, so their intersection, the line of nodes, also precesses with respect to the fixed stars. Although the plane of the ecliptic is often held fixed at the position it occupied at a specific epoch, the orbital plane of the object still precesses causing the draconitic period to differ from the sidereal period.
• The anomalistic period is the time that elapses between two passages of an object at its periapsis (in the case of the planets in the solar system, called the perihelion), the point of its closest approach to the attracting body. It differs from the sidereal period because the object's semimajor axis typically advances slowly.
• Also, the Earth's tropical period (or simply its "year") is the time that elapses between two alignments of its axis of rotation with the Sun, also viewed as two passages of the object at right ascension zero. One Earth year has a slightly shorter interval than the solar orbit (sidereal period) because the inclined axis and equatorial plane slowly precesses (rotates in sidereal terms), realigning before orbit completes with an interval equal to the inverse of the precession cycle (about 25,770 years).
Small body orbiting a central body
According to Kepler's Third Law, the orbital period T (in seconds) of two bodies orbiting each other in a circular or elliptic orbit is:[citation needed]
${\displaystyle T=2\pi {\sqrt {\frac {a^{3}}{\mu }}}}$
where:
• a is the orbit's semi-major axis,
• μ = GM is the standard gravitational parameter (the product of the gravitational constant and the mass of the central body).
For all ellipses with a given semi-major axis the orbital period is the same, regardless of eccentricity.
Inversely, for calculating the distance at which a body has to orbit in order to have a given orbital period:
${\displaystyle a={\sqrt[{3}]{\frac {GMT^{2}}{4\pi ^{2}}}}}$
where:
• a is the orbit's semi-major axis,
• G is the gravitational constant,
• M is the mass of the central body,
• T is the orbital period.
For instance, for completing an orbit every 24 hours around a mass of 100 kg, a small body has to orbit at a distance of 1.08 meters from its center of mass.
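As a quick numerical check of that figure (an added sketch, not part of the original article):
```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 100.0          # central mass, kg
T = 24 * 3600.0    # orbital period of 24 hours, in seconds

a = (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"a = {a:.2f} m")  # ~1.08 m, as stated above
```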
Orbital period as a function of central body's density
When a very small body is in a circular orbit barely above the surface of a sphere of any radius and mean density ρ (in kg/m3), the above equation simplifies to (since M = (4/3)πa³ρ):[citation needed]
${\displaystyle T={\sqrt {\frac {3\pi }{G\rho }}}}$
So, for the Earth as the central body (or any other spherically symmetric body with the same mean density, about 5,515 kg/m3)[1] we get:
T = 1.41 hours
and for a body made of water (ρ ≈ 1,000 kg/m3)[2]
T = 3.30 hours
Thus, as an alternative for using a very small number like G, the strength of universal gravity can be described using some reference material, like water: the orbital period for an orbit just above the surface of a spherical body of water is 3 hours and 18 minutes. Conversely, this can be used as a kind of "universal" unit of time if we have a unit of mass, a unit of length and a unit of density.
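A similar sketch (added here) reproduces the 1.41-hour and 3.30-hour periods quoted above from the density form of the formula:
```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_orbit_period_hours(density):
    """Period of a circular orbit just above the surface of a sphere of the given mean density (kg/m^3)."""
    return math.sqrt(3 * math.pi / (G * density)) / 3600.0

print(f"Earth's mean density (5515 kg/m^3): {surface_orbit_period_hours(5515):.2f} h")  # ~1.41 h
print(f"Water (1000 kg/m^3):                {surface_orbit_period_hours(1000):.2f} h")  # ~3.30 h
```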
Two bodies orbiting each other
In celestial mechanics, when both orbiting bodies' masses have to be taken into account, the orbital period T can be calculated as follows:[3]
${\displaystyle T=2\pi {\sqrt {\frac {a^{3}}{G\left(M_{1}+M_{2}\right)}}}}$
where:
• a is the sum of the semi-major axes of the ellipses in which the centers of the bodies move, or equivalently, the semi-major axis of the ellipse in which one body moves, in the frame of reference with the other body at the origin (which is equal to their constant separation for circular orbits),
• M1 + M2 is the sum of the masses of the two bodies,
• G is the gravitational constant.
Note that the orbital period is independent of size: for a scale model it would be the same, when densities are the same (see also Orbit#Scaling in gravity).[citation needed]
In a parabolic or hyperbolic trajectory, the motion is not periodic, and the duration of the full trajectory is infinite.[citation needed]
Synodic period
When two bodies orbit a third body in different orbits, and thus have different orbital periods, their synodic period can be found. If the orbital periods of the two bodies around the third are called P1 and P2, so that P1 < P2, their synodic period is given by
${\displaystyle {\frac {1}{P_{\mathrm {syn} }}}={\frac {1}{P_{1}}}-{\frac {1}{P_{2}}}}$
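As a numerical illustration (added here, not part of the original article), plugging Earth's and Mars's sidereal periods into this formula reproduces the Mars entry in the table below:
```python
def synodic_period(p1, p2):
    """Synodic period of two bodies with sidereal periods p1 < p2 (same time unit)."""
    return 1.0 / (1.0 / p1 - 1.0 / p2)

print(f"{synodic_period(1.0, 1.881):.3f} yr")  # ~2.135 yr, matching the Mars row below
```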
Examples of sidereal and synodic periods
Table of synodic periods in the Solar System, relative to Earth:[citation needed]
| Object | Sidereal period (yr) | Synodic period (yr) | Synodic period (d) |
| --- | --- | --- | --- |
| Mercury | 0.240846 (87.9691 days) | 0.317 | 115.88 |
| Venus | 0.615 (225 days) | 1.599 | 583.9 |
| Earth | 1 (365.25636 solar days) | — | — |
| Moon | 0.0748 (27.32 days) | 0.0809 | 29.5306 |
| 99942 Apophis (near-Earth asteroid) | 0.886 | 7.769 | 2,837.6 |
| Mars | 1.881 | 2.135 | 779.9 |
| 4 Vesta | 3.629 | 1.380 | 504.0 |
| 1 Ceres | 4.600 | 1.278 | 466.7 |
| 10 Hygiea | 5.557 | 1.219 | 445.4 |
| Jupiter | 11.86 | 1.092 | 398.9 |
| Saturn | 29.46 | 1.035 | 378.1 |
| Uranus | 84.01 | 1.012 | 369.7 |
| Neptune | 164.8 | 1.006 | 367.5 |
| 134340 Pluto | 248.1 | 1.004 | 366.7 |
| 136199 Eris | 557 | 1.002 | 365.9 |
| 90377 Sedna | 12050 | 1.00001 | 365.1[citation needed] |
In the case of a planet's moon, the synodic period usually means the Sun-synodic period, namely, the time it takes the moon to complete its illumination phases, completing the solar phases for an astronomer on the planet's surface. The Earth's motion does not determine this value for other planets because an Earth observer is not orbited by the moons in question. For example, Deimos's synodic period is 1.2648 days, 0.18% longer than Deimos's sidereal period of 1.2624 d.[citation needed]
Binary stars
| Binary star | Orbital period |
| --- | --- |
| AM Canum Venaticorum | 17.146 minutes |
| Beta Lyrae AB | 12.9075 days |
| Alpha Centauri AB | 79.91 years |
| Proxima Centauri – Alpha Centauri AB | 500,000 years or more |
|
2556. Football Foundation (FOFO)
Time Limit: 0.5 Seconds Memory Limit: 65536K
Total Runs: 4945 Accepted Runs: 1674
The football foundation (FOFO) has been researching on soccer; they created a set of sensors to describe the ball behavior based on a grid uniformly distributed on the field. They found that they could predict the ball movements based on historical analysis. Each square sensor of the grid can detect the following patterns:
N north (up the field)
S south (down the field)
E east (to the right on the field)
W west (to the left on the field)
For example, in grid 1, suppose the ball was thrown into the field from the north side. The path the sensors detected for this movement follows as shown. The ball went through 10 sensors before leaving the field.
In comparison, on grid 2 the ball went through 3 sensors only once, then started a loop through 8 instructions and never exits the field.
You are selected to write a program in order to evaluate the line judges' job, with the following output: based on each grid of sensors, the program needs to determine how long it takes the ball to get out of the grid, or how the ball loops around.
### Input
There will be one or more grids of sensors for the same game. The data for each is in the following form. On the first line are three integers separated by blanks: The number of rows in the grid, the number of columns in the grid, and the number of the column in which the ball enters from the north. The grid column's number starts with one at the left. Then come the rows of direction instructions. The lines of instructions contain only the characters N, S, E or W, with no blanks. The end of input is indicated by a grid containing 0 0 0 as limits.
### Output
For each grid in the input there is one line of output. Either the ball follows a certain number of sensors and exits the field on any one of the four sides or else the ball follows the behavior on some number of sensors repeatedly. The sample input below corresponds to the two grids above and illustrates the two forms of output. The word "step" is always immediately followed by "(s)" whether or not the number before is 1.
### Sample Input
3 6 5
NEESWE
WWWESS
SNWWWW
4 5 1
SESWE
EESNW
NWEEN
EWSEN
0 0 0
### Sample Output
10 step(s) to exit
3 step(s) before a loop of 8 step(s)
Source: Mexico and Central America 2006
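A minimal solution sketch in Python (added here, not part of the original problem statement): simulate the ball and record the step at which each sensor is first visited, so that revisiting a sensor gives both the number of steps before the loop and the loop length. Names and I/O handling are illustrative.
```python
import sys

def solve(rows, cols, entry_col, grid):
    # step_seen[r][c] = step index at which the ball first entered sensor (r, c), or -1
    step_seen = [[-1] * cols for _ in range(rows)]
    moves = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1)}
    r, c, step = 0, entry_col - 1, 0
    while 0 <= r < rows and 0 <= c < cols:
        if step_seen[r][c] != -1:                      # revisited sensor: loop detected
            loop = step - step_seen[r][c]
            return f"{step_seen[r][c]} step(s) before a loop of {loop} step(s)"
        step_seen[r][c] = step
        dr, dc = moves[grid[r][c]]
        r, c, step = r + dr, c + dc, step + 1
    return f"{step} step(s) to exit"

def main():
    data = sys.stdin.read().split()
    i, out = 0, []
    while True:
        rows, cols, col = int(data[i]), int(data[i + 1]), int(data[i + 2])
        i += 3
        if rows == cols == col == 0:
            break
        grid = data[i:i + rows]   # each row is a string of N/S/E/W characters
        i += rows
        out.append(solve(rows, cols, col, grid))
    print("\n".join(out))

if __name__ == "__main__":
    main()
```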
|
• ### Improved Search for Heavy Neutrinos in the Decay $\pi\rightarrow e\nu$(1712.03275)
March 26, 2018 hep-ex
A search for massive neutrinos has been made in the decay $\pi\rightarrow e^+ \nu$. No evidence was found for extra peaks in the positron energy spectrum indicative of pion decays involving massive neutrinos ($\pi\rightarrow e^+ \nu_h$). Upper limits (90 \% C.L.) on the neutrino mixing matrix element $|U_{ei}|^2$ in the neutrino mass region 60--135 MeV/$c^2$ were set, representing an order of magnitude improvement over previous results.
• ### A new search for the $K_{L} \to \pi^0 \nu \overline{\nu}$ and $K_{L} \to \pi^{0} X^{0}$ decays(1609.03637)
Dec. 28, 2016 hep-ex
We searched for the $CP$-violating rare decay of neutral kaon, $K_{L} \to \pi^0 \nu \overline{\nu}$, in data from the first 100 hours of physics running in 2013 of the J-PARC KOTO experiment. One candidate event was observed while $0.34\pm0.16$ background events were expected. We set an upper limit of $5.1\times10^{-8}$ for the branching fraction at the 90\% confidence level (C.L.). An upper limit of $3.7\times10^{-8}$ at the 90\% C.L. for the $K_{L} \to \pi^{0} X^{0}$ decay was also set for the first time, where $X^{0}$ is an invisible particle with a mass of 135 MeV/$c^{2}$.
• ### Long-lived neutral-kaon flux measurement for the KOTO experiment(1509.03386)
Jan. 7, 2016 hep-ex, physics.ins-det
The KOTO ($K^0$ at Tokai) experiment aims to observe the CP-violating rare decay $K_L \rightarrow \pi^0 \nu \bar{\nu}$ by using a long-lived neutral-kaon beam produced by the 30 GeV proton beam at the Japan Proton Accelerator Research Complex. The $K_L$ flux is an essential parameter for the measurement of the branching fraction. Three $K_L$ neutral decay modes, $K_L \rightarrow 3\pi^0$, $K_L \rightarrow 2\pi^0$, and $K_L \rightarrow 2\gamma$ were used to measure the $K_L$ flux in the beam line in the 2013 KOTO engineering run. A Monte Carlo simulation was used to estimate the detector acceptance for these decays. Agreement was found between the simulation model and the experimental data, and the remaining systematic uncertainty was estimated at the 1.4\% level. The $K_L$ flux was measured as $(4.183 \pm 0.017_{\mathrm{stat.}} \pm 0.059_{\mathrm{sys.}}) \times 10^7$ $K_L$ per $2\times 10^{14}$ protons on a 66-mm-long Au target.
• ### Status of the TRIUMF PIENU Experiment(1509.08437)
Oct. 2, 2015 hep-ex, physics.ins-det
The PIENU experiment at TRIUMF aims to measure the pion decay branching ratio $R={\Gamma}({\pi}^+{\rightarrow}e^+{\nu}_e({\gamma}))/{\Gamma}({\pi}^+{\rightarrow}{\mu}^+{\nu}_{\mu}({\gamma}))$ with precision $<0.1$% to provide a sensitive test of electron-muon universality in weak interactions. The current status of the PIENU experiment is presented.
• ### Improved measurement of the $\pi \rightarrow \mbox{e} \nu$ branching ratio(1506.05845)
Aug. 12, 2015 hep-ex
A new measurement of the branching ratio, $R_{e/\mu} =\Gamma (\pi^+ \rightarrow \mbox{e}^+ \nu + \pi^+ \rightarrow \mbox{e}^+ \nu \gamma)/ \Gamma (\pi^+ \rightarrow \mu^+ \nu + \pi^+ \rightarrow \mu^+ \nu \gamma)$, resulted in $R_{e/\mu}^{exp} = (1.2344 \pm 0.0023 (stat) \pm 0.0019 (syst)) \times 10^{-4}$. This is in agreement with the standard model prediction and improves the test of electron-muon universality to the level of 0.1 %.
• ### Detector for measuring the $\pi^+\to e^+\nu_e$ branching fraction(1505.02737)
May 11, 2015 hep-ex, physics.ins-det
The PIENU experiment at TRIUMF is aimed at a measurement of the branching ratio $R^{e/\mu}$ = ${\Gamma\big((\pi^{+} \rightarrow e^{+} \nu_{e}) + (\pi^{+} \rightarrow e^{+} \nu_{e}\gamma)\big)}/{\Gamma\big((\pi^{+} \rightarrow \mu^{+} \nu_{\mu})+(\pi^{+} \rightarrow \mu^{+} \nu_{\mu}\gamma)\big)}$ with precision $<$0.1\%. Incident pions, delivered at the rate of 60 kHz with momentum 75 MeV/c, were degraded and stopped in a plastic scintillator target. Pions and their decay product positrons were detected with plastic scintillators and tracked with multiwire proportional chambers and silicon strip detectors. The energies of the positrons were measured in a spectrometer consisting of a large NaI(T$\ell$) crystal surrounded by an array of pure CsI crystals. This paper provides a description of the PIENU experimental apparatus and its performance in pursuit of $R^{e/\mu}$.
• ### Report of the Quark Flavor Physics Working Group(1311.1076)
Dec. 9, 2013 hep-ph, hep-ex, hep-lat
This report represents the response of the Intensity Frontier Quark Flavor Physics Working Group to the Snowmass charge. We summarize the current status of quark flavor physics and identify many exciting future opportunities for studying the properties of strange, charm, and bottom quarks. The ability of these studies to reveal the effects of new physics at high mass scales make them an essential ingredient in a well-balanced experimental particle physics program.
• The Proceedings of the 2011 workshop on Fundamental Physics at the Intensity Frontier. Science opportunities at the intensity frontier are identified and described in the areas of heavy quarks, charged leptons, neutrinos, proton decay, new light weakly-coupled particles, and nucleons, nuclei, and atoms.
• ### Study of the K0(L) --> pi0 pi0 nu nu-bar decay(1106.3404)
Sept. 9, 2011 hep-ex
The rare decay K0(L) --> pi0 pi0 nu nu-bar was studied with the E391a detector at the KEK 12-GeV proton synchrotron. Based on 9.4 x 10^9 K0L decays, an upper limit of 8.1 x 10^{-7} was obtained for the branching fraction at 90% confidence level. We also set a limit on the K0(L) --> pi0 pi0 X (X --> invisible particles) process; the limit on the branching fraction varied from 7.0 x 10^{-7} to 4.0 x 10^{-5} for the mass of X ranging from 50 MeV/c^2 to 200 MeV/c^2.
• ### Search for the decay $K_L^0 \rightarrow 3\gamma$(1011.4403)
Nov. 22, 2010 hep-ex
We performed a search for the decay $K_L^0 \rightarrow 3\gamma$ with the E391a detector at KEK. In the data accumulated in 2005, no event was observed in the signal region. Based on the assumption of $K_L^0 \rightarrow 3\gamma$ proceeding via parity-violation, we obtained the single event sensitivity to be $(3.23\pm0.14)\times10^{-8}$, and set an upper limit on the branching ratio to be $7.4\times10^{-8}$ at the 90% confidence level. This is a factor of 3.2 improvement compared to the previous results. The results of $K_L^0 \rightarrow 3\gamma$ proceeding via parity-conservation were also presented in this paper.
• ### High Purity Pion Beam at TRIUMF(1001.3121)
An extension of the TRIUMF M13 low-energy pion channel designed to suppress positrons based on an energy-loss technique is described. A source of beam channel momentum calibration from the decay pi+ --> e+ nu is also described.
• ### Search for a light pseudoscalar particle in the decay $K^0_L \to \pi^0 \pi^0 X$(0810.4222)
Feb. 6, 2009 hep-ex
We performed a search for a light pseudoscalar particle $X$ in the decay $K_L^0->pi0pi0X$, $X->\gamma\gamma$ with the E391a detector at KEK. Such a particle with a mass of 214.3 MeV/$c^2$ was suggested by the HyperCP experiment. We found no evidence for $X$ and set an upper limit on the product branching ratio for $K_L^0->pi0pi0X$, $X->\gamma\gamma$ of $2.4 \times 10^{-7}$ at the 90% confidence level. Upper limits on the branching ratios in the mass region of $X$ from 194.3 to 219.3 MeV/$c^2$ are also presented.
• ### The WASA Detector Facility at CELSIUS(0803.2657)
March 18, 2008 nucl-ex
The WASA 4pi multidetector system, aimed at investigating light meson production in light ion collisions and eta meson rare decays at the CELSIUS storage ring in Uppsala is presented. A detailed description of the design, together with the anticipated and achieved performance parameters are given.
• ### Relativistic effects and two-body currents in $^{2}H(\vec{e},e^{\prime}p)n$ using out-of-plane detection(nucl-ex/0105006)
May 15, 2001 nucl-ex
Measurements of the ${^2}H(\vec{e},e^{\prime}p)n$ reaction were performed using an 800-MeV polarized electron beam at the MIT-Bates Linear Accelerator and with the out-of-plane magnetic spectrometers (OOPS). The longitudinal-transverse, $f_{LT}$ and $f_{LT}^{\prime}$, and the transverse-transverse, $f_{TT}$, interference responses at a missing momentum of 210 MeV/c were simultaneously extracted in the dip region at Q$^2$=0.15 (GeV/c)$^2$. On comparison to models of deuteron electrodisintegration, the data clearly reveal strong effects of relativity and final-state interactions, and the importance of the two-body meson-exchange currents and isobar configurations. We demonstrate that these effects can be disentangled and studied by extracting the interference response functions using the novel out-of-plane technique.
• ### Tensor Analyzing Powers for Quasi-Elastic Electron Scattering from Deuterium(nucl-ex/9809002)
Sept. 18, 1998 nucl-ex
We report on a first measurement of tensor analyzing powers in quasi-elastic electron-deuteron scattering at an average three-momentum transfer of 1.7 fm$^{-1}$. Data sensitive to the spin-dependent nucleon density in the deuteron were obtained for missing momenta up to 150 MeV/$c$ with a tensor polarized $^2$H target internal to an electron storage ring. The data are well described by a calculation that includes the effects of final-state interaction, meson-exchange and isobar currents, and leading-order relativistic contributions.
|
# R is the easiest language to speak badly
I am amazed by the number of comments I received on my recent blog entry about "by", "apply" and friends. I had started my post by pointing out that R is a language. Well indeed, I have come to the conclusion, that it is a language with lots of irregular expressions and dialects. It feels a bit like German or French where you have to learn and memorise the different articles. The Germans have three singular definite articles: der (male), die (female) and das (neutral), the French have two: le (male) and la (female). Of course there is no mapping between them, and how do you explain that a girl in German is neutral (das Mädchen), while manhood is female (die Männlichkeit)?
Back to R. As I found out, there are lots of different ways to calculate the means on subsets of data. I begin to wonder, why so many different interfaces and functions have been developed over the years, and also why I didn't use the aggregate function more often in the past?
Can we blame internet search engines? Why should I learn a programming language properly, when I can find approximate answers to my problem online. I may not end up with the best answer, but with something which will work after all: Don't know why, but it works.
And sometimes the help files can be more difficult to understand than the code in the examples. Hence, I end up playing around with the example code until it works, and only then I try to figure out how it works. That was my experience with reshape.
Maybe this is a bit harsh. It is always up to the individual to improve his language skills, but you can get drunk in a pub as well, by only being able to order beer. I think it was George Bernard Shaw, who said: "R is the easiest language to speak badly." No, actually he said: "English is the easiest language to speak badly." Maybe that explains the success of English and R?
Reading helps. More and more books have been published on R over the last years, and not only in English. But which should you pick? Xi'an's review on the Art of R Programming suggests that it might be a good start.
Back to aggregate. Has anyone noticed, that the formula interface of aggregate is different to summaryBy?
aggregate(cbind(Sepal.Width, Petal.Width) ~ Species, data=iris, FUN=mean)
     Species Sepal.Width Petal.Width
1     setosa       3.428       0.246
2 versicolor       2.770       1.326
3  virginica       2.974       2.026
versus
library(doBy)
summaryBy(Sepal.Width + Petal.Width ~ Species, data=iris, FUN=mean)
     Species Sepal.Width.mean Petal.Width.mean
1     setosa            3.428            0.246
2 versicolor            2.770            1.326
3  virginica            2.974            2.026
And another slightly more complex example:
aggregate(cbind(ncases, ncontrols) ~ alcgp + tobgp, data = esoph, FUN=sum)
summaryBy(ncases + ncontrols ~ alcgp + tobgp, data = esoph, FUN=sum)
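For what it's worth, base R's tapply (which comes up in the comments below) is yet another interface for the same kind of summary. A quick sketch with the iris example from above:
with(iris, tapply(Sepal.Width, Species, mean))
with(iris, tapply(Petal.Width, Species, mean))
Unlike aggregate and summaryBy, tapply returns a named vector per variable rather than a data frame.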
1. It would be good if you pointed out that the summaryBy function is in the doBy package and not part of the normal stats library...
2. this is why plyr is so good - you can argue whether or not it's better, but it certainly gives you a consistent interface.
3. I don't know about R being the same as a spoken language - have you tried asking directions or ordering a meal in R? Regarding the explanation for what seem inconsistencies in the German language - 'Mädchen' is a diminuitive form of 'Maid' which has a feminine article as expected. All diminutive forms 'Häuschen', 'Hündchen' use the neutral article. You will also notice that all nouns ending in '-keit' use a femnine article irrespective of what adjective has been coupled with it. Otherwise, nice post. Thanks.
4. The history of this is that aggregate in the out-of-the-box R did not have a formula method. During that time the doBy add-on package (not part of the out-of-the-box R) and its summaryBy command were created. Later a formula method was added to aggregate in the out-of-the-box R.
5. Keep in mind that there are thousands of contributed R-packages submitted by different authors. Since there's no team of editors poring over submissions (think Apple iPAD app approval process), these packages will not have perfectly consistent I/O formats. This really isn't different from, say, the MatLab online contributors' directory.
6. Good point. I have added the library statement to the post.
7. Indeed, and sometimes it is both a blessing and a curse.
8. Hello All.
Software Developer's Journal published new issue fully dedicated to R language. You can read the teaser now: http://sdjournal.org/data-development-gems-software-developers-journal-teaser/
9. Andrej-Nikolai Spiess8 March 2014 at 15:32
You're speaking out of my heart, to do some direct german translation ;-)
Same with me: I deliberately used tapply(data, factor, mean) until somebody mentioned the 'ave' function which also seems to be living in oblivion...
Cheers,
Andrej
10. Of course, it easier when you are writing in the language that you grew up speaking. While many of us who may read this post grew up speaking English
English for Literature
11. I think everyone wants to ensure that they present their best work at all times. This is so whether someone is writing in English or another language. Of course, it easier when you are writing in the language that you grew up speaking.
TESOL training
|
How do you simplify (343 u^4 c^-5) /(7 u^6 c^-3)^-5?
Aug 8, 2018
$\frac{5764801 u^{34}}{c^{20}}$
Explanation:
There is a negative exponent rule, I'm not quite sure if it has a name, but it says that a negative exponent in the numerator can be moved to the denominator and become positive, and vice versa.
An example would be $x^{-2} = \frac{1}{x^2}$
So using this
$\frac{(7 u^{6} c^{-3})^{5}\,(343 u^{4})}{c^{5}}$
Then we can distribute the exponent, $5$, in the numerator
$\frac{16807 u^{30} c^{-15}\,(343 u^{4})}{c^{5}}$
Now we can move the $c^{-15}$ to the denominator using the negative exponent rule
$\frac{16807 u^{30}\,(343 u^{4})}{c^{5} \cdot c^{15}}$
We can now combine like bases
$\frac{5764801 u^{34}}{c^{20}}$
Aug 9, 2018
$(x^{a})^{b} = x^{a \times b}$
$\frac{343 u^{4} c^{-5}}{(7 u^{6} c^{-3})^{-5}} = \frac{7^{3} u^{4} c^{-5}}{7^{-5} u^{-30} c^{15}}$
$x^{a} / x^{b} = x^{a - b}$
$7^{8} u^{34} c^{-20}$
or $\frac{7^{8} u^{34}}{c^{20}}$
Aug 9, 2018
$\frac{7^{8} u^{34}}{c^{20}}$
Explanation:
$\frac{343 u^{4} c^{-5}}{(7 u^{6} c^{-3})^{-5}}$
Use the law of indices for negative indices:
$x^{-m} = \frac{1}{x^{m}}$
$= \frac{343 u^{4} \times (7 u^{6} c^{-3})^{5}}{c^{5}}$
Note that $343 = 7^{3}$
It is better to keep the numbers in index form.
$= \frac{7^{3} u^{4} \times 7^{5} u^{30} c^{-15}}{c^{5}}$
$= \frac{7^{3} u^{4} \times 7^{5} u^{30}}{c^{5} \times c^{15}}$
Add the indices of like bases:
$= \frac{7^{8} u^{34}}{c^{20}}$
|
# HELP!!!
Please help with this equation. My 4th grade math teacher is giving it as an extension, and I know quadratics and other stuff, but I don't know how to solve this properly. The real numbers $x$ and $y$ satisfy \begin{align*} x + y &= 4, \\ x^2 + y^2 &= 22, \\ x^4 &= y^4 - 176 \sqrt{7}. \end{align*} Compute $x - y.$
Mar 5, 2020
#1
and i know quadratics and other stuff but i don't know how to solve this properly
The real numbers $$x$$ and $$y$$ satisfy
\begin{align*} x + y &= 4, \\ x^2 + y^2 &= 22, \\ x^4 &= y^4 - 176 \sqrt{7}. \end{align*}
Compute $$x - y$$.
$$\begin{array}{|rcll|} \hline x^4 &=& y^4 - 176 \sqrt{7} \quad | \quad \times(-1) \\ -x^4 &=& -y^4 + 176 \sqrt{7} \quad | \quad +y^4 \\ y^4-x^4 &=& 176 \sqrt{7} \\ && \boxed{\text{Formula }~a^2-b^2 =(a-b)(a+b) }\\ (y^2-x^2)(y^2+x^2) &=& 176 \sqrt{7} \quad | \quad y^2-x^2=(y-x)(y+x) \\ (y-x)(y+x)(y^2+x^2) &=& 176 \sqrt{7} \quad | \quad y+x = 4 \\ 4(y-x)(y^2+x^2) &=& 176 \sqrt{7} \quad | \quad y^2+x^2 = 22 \\ 4*22(y-x)&=& 176 \sqrt{7} \\ 88(y-x)&=& 176 \sqrt{7} \quad | \quad : 88 \\ y-x &=& 2 \sqrt{7} \quad | \quad \times(-1) \\ \mathbf{ x-y } &=& \mathbf{-2 \sqrt{7}} \\ \hline \end{array}$$
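A quick sanity check: from $x+y=4$ and $x^2+y^2=22$ we get $xy=\frac{(x+y)^2-(x^2+y^2)}{2}=\frac{16-22}{2}=-3$, so $x$ and $y$ are the roots of $t^2-4t-3=0$, i.e. $2\pm\sqrt{7}$. Taking $x=2-\sqrt{7}$ and $y=2+\sqrt{7}$ gives $y^4-x^4=(y^2-x^2)(y^2+x^2)=(y-x)(y+x)\cdot 22=(2\sqrt{7}\cdot 4)\cdot 22=176\sqrt{7}$, consistent with the third equation, and indeed $x-y=-2\sqrt{7}$.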
Mar 5, 2020
|
# Program to find minimum costs needed to fill fruits in optimized way in Python
Suppose we have a list called fruits and another two values k and cap. Where each fruits[i] has three values: [c, s, t], this indicates fruit i costs c each, size of each of them is s, and there is total t of them. The k represents number of fruit baskets of capacity cap. We want to fill the fruit baskets with the following constraints in this order −
• Each basket can only hold same type fruits
• Each basket should be as full as possible
• Each basket should be as cheap as possible
So we have to find the minimum cost required to fill as many baskets as possible.
So, if the input is like fruits = [[5, 2, 3],[6, 3, 2],[2, 3, 2]], k = 2, cap = 4, then the output will be 12, because we can take two of fruit 0: with these two we can make the first basket full (total size 2+2=4) at a cost of 5+5=10. Then we use one of fruit 2 because it is cheaper; this costs 2 units, for a total of 12.
To solve this, we will follow these steps −
• options := a new list
• for each triplet (c, s, t) in fruits, do
• while t > 0, do
• fnum := minimum of floor of (cap / s) and t
• if fnum is same as 0, then
• come out from loop
• bnum := floor of t / fnum
• insert triplet (cap - fnum * s, fnum * c, bnum) at the end of options
• t := t - bnum * fnum
• ans := 0
• for each triplet (left_cap, bcost, bnum) in the sorted list of options, do
• bfill := minimum of k and bnum
• ans := ans + bcost * bfill
• k := k - bfill
• if k is same as 0, then
• come out from loop
• return ans
## Example
Let us see the following implementation to get better understanding −
def solve(fruits, k, cap):
    options = []
    for c, s, t in fruits:
        while t > 0:
            fnum = min(cap // s, t)
            if fnum == 0:
                break
            bnum = t // fnum
            options.append((cap - fnum * s, fnum * c, bnum))
            t -= bnum * fnum
    ans = 0
    for left_cap, bcost, bnum in sorted(options):
        bfill = min(k, bnum)
        ans += bcost * bfill
        k -= bfill
        if k == 0:
            break
    return ans

fruits = [[5, 2, 3],[6, 3, 2],[2, 3, 2]]
k = 2
cap = 4
print(solve(fruits, k, cap))
## Input
[[5, 2, 3],[6, 3, 2],[2, 3, 2]], 2, 4
## Output
12
|
# FORCE cubes: orange and yellow magnetic 7x7s (Aofu GTS M), magnetic megaminxes (Galaxy V2 M), + misc stuff
#### Doctor Hedron
##### Member
I've been making some force cubes, and I'm selling the other colors that I don't need.
Paypal hugely preferred. Airmail shipping with tracking number is $8 for < 250 g (usually one puzzle), and$12 for 250-500 g (two puzzles). It takes ~2-2.5 weeks around the world, but it depends on your postal service too, of course.
I'm not very active on this forum, but I'm a long-time member of /r/cubers on reddit and of TwistyPuzzles forum. If you want, you can contact me there instead. Here are some public "testimonials" / endorsements from earlier buyers: one, two, three, four, five, six, seven, eight
---------------
FORCE MAGNETIC 7x7: MOYU AOFU GTS M. It also comes with a bag of extra pieces of the same color, a "7th side". It will fit into a $12 parcel, but not$8.
Colors left available: orange for $65 and yellow for $55. They were all $65, but I realize that yellow is less popular. For reference: my orange 8x8 with stickers, my yellow 9x9 with stickers. Recognition is surprisingly great on both. Orange is identical between 7x7 and 8x8. Yellow is a bit more saturated on the 7x7.
---------------
CHEAP MISC STUFF (same photo above): Moyu Axis Cube stickerless x2 (never used, failed force project b/c of how pieces are made), for $3. And one blue FORCE REDI BARREL, for $6. It's more of a novelty fidgety thing, but if you really need stickers, I can point you to a custom sticker maker who made a template from my scans. Any one of these small things can fit into the same parcel as the 7x7 with no extra $ in shipping (it will still be under 500 g).
---------------
FORCE GALAXY V2 M, concave. Here's a dedicated post in /r/cubers about them. Mind you, these come with four extra sides worth of spare parts (a lifetime supply!), because of a specific kink in how edge pieces are made. It's unavoidable with this specific megaminx. Here's a video with a detailed explanation.
Older photos of all colors together: Image 1, Image 2
Price: $35. Used to be 40 (this is the MSRP price of the regular one ($30) + extra sides), but I've discounted these last ones.
Colors available: light blue, pink, yellow, cream / pale light yellow (looks kinda like primary when stickered), and white for $30 (factory magnetic white doesn't exist). Weight: ~248 g (fits into one$8 parcel if you let me remove one or two items of decorative crap from the box).
Also, one REGULAR stickerless Galaxy V2 M, concave for $16, which is slightly below wholesale price. Best offer you'll get on this, even with shipping. Unused, scrambled/solved exactly once for a photo for a local Craigslist. This is a leftover from the force project (TLDR - you can't predict in advance how many exactly you need because of how Galaxy V2 edges are constructed, see video above).
----------------
FORCE YJ Yusu R 4x4 - a budget 4x4. Only green color left. $6. 150 g. I also have one set of stickers from SCS, if you want ($2).
Last edited:
#### Doctor Hedron
##### Member
The 4x4x5 cuboid and the sculpted regular megaminxes are sold. One *concave* regular mega remaining (for $16 + shipping, see above), as well as the force ones.
If you want any of the above things before Christmas, either for yourself or as a gift to someone, there is still time, btw. Shipping to US takes about 2-2.5 weeks, in past experience.
#### deeplz
##### Member
Do you still have the concave V2 M?
#### Doctor Hedron
##### Member
The 7x7 (this is Aofu GTS M) is still unstickered, because:
* I'm waiting for a local store to have it in stock any day now, so that I can hopefully swap a couple corner stalks with them and have the 7x7s be *fully* consistent in color inside
* SCS doesn't have stickers for it yet :/ But hopefully that changes soon.
And regarding megaminxes, I made round 2 a while back and I still have these leftover colors that I'm selling in the forum's marketplace area. Anyone wants that pink?
#### Doctor Hedron
##### Member
Whoops - I didn't realize that the moderator moved the post here (I originally posted this as a "May the force be with you" standalone pic).
I haven't updated that first post in a while; I'll get on fixing it now.
Regarding the 4x4 (Yusu R), I only have the green one left (candy green), and extra stickers if you need 'em.
#### Twifty
##### Member
What happened to the MF8? I’m interested.
#### Doctor Hedron
##### Member
What happened to the MF8? I’m interested.
Oh, yeah, I edited the body of the post but forgot to edit the title, sorry. I'll fix it now for real.
That remaining MF8 was sold since then. :/
However, I have an unclaimed orange Aofu GTS M 7x7 (with spare orange parts on the side) for $65 + shipping. Freshly made, just finalized them a few days ago. The orange plastic is the same as on the orange 8x8 seen above. It's not too screaming and my recognition doesn't suffer (black stickers on the orange side).
Further plans:
* Aochuang GTS M 5x5 - very soon, ETA mid-June or so (green will be mine, the others are available)
* Probably Aosu GTS2 M and Aoshi GTS M at some point in the future
* Maybe I'll tackle the upcoming Meilong 12x12, but for that, I'll need to have buyers for other colors lined up in advance, have them have serious intentions (maybe ~30% upfront as proof, in exchange for detailed updates w/ photos), *and* I'll need to make the 5x5s before that, to have an extra data point to extrapolate how many hours the 12x12 will take. Making 9x9s is likely child's play compared to 12x12s, so I need to be smart about it.
Last edited:
#### Twifty
##### Member
Is the sculpted V2 M still in stock?
#### Doctor Hedron
##### Member
I cleaned up the first post with better, concise formatting and a new picture (as well as double checking what I have left). Summary: selling orange and yellow force magnetic 7x7s from this set:
Also a couple Moyu Axis cubes, never used (from a failed force project idea), and a blue force Redi Barrel. All of these in one photo:
The offer on the megaminxes still stands.
#### Doctor Hedron
##### Member
Who wants a force 5x5 M? These are Moyu Aochuang GTS M. $40 for red, orange and blue, $35 for yellow. Shipping is $8 without the extra Moyu stuff (Team Moyu cards and 5x5 instruction pamphlet), or $12 with them (it's based on weight and it's cutting it extremely close to the weight cutoff). Even with shipping, it's cheaper than a Valk 5 M in retail! Valk 5 is hyped a lot on social media by Qiyi these days, but Aochuang is also capable of achieving that 45 degree corner cutting, and is a tried and tested flagship magnetic 5x5. I also have two sets of "half-bright + black" stickers fitted for this cube (from SCS) for $2 (cheaper than on SCS!), but together with the cube they won't fit into the $8 parcel - it will be a few grams too much. But if you want to grab something else together with the 5x5, then shipping is $12 for two puzzles under 500 g and stickers will fit there.
And this is what my green one looks like. Examples of how stickered orange and yellow look are a few posts above (8x8 and 9x9).
|
# nops_scan
##### Read Scanned NOPS Exams
Read scanned NOPS exams produced with exams2nops.
Keywords
utilities
##### Usage
nops_scan(images = dir(pattern = "\\.PNG$|\\.png$|\\.PDF$|\\.pdf$"), file = NULL,
  dir = ".", verbose = TRUE, rotate = FALSE, cores = NULL, n = NULL,
  density = 300, size = 0.029, threshold = c(0.04, 0.42), minrot = 0.002,
  string = FALSE)
##### Arguments
images
character. Names of the PDF/PNG images containing the scanned exams. By default all PDF/PNG images in the current working directory are used.
file
character or logical. Optional file name for the output ZIP archive containing the PNG images and the scan results. If file = FALSE no ZIP archive is created. By default a suitable name using the current time/date is used.
dir
character. Directory in which the ZIP file should be created. By default the current working directory.
verbose
logical. Should progress information be displayed?
rotate
logical. Should the input PDF/PNG images be rotated by 180 degrees first?
cores
numeric. If set to an integer mclapply is called internally using the desired number of cores to read the scanned exams in parallel.
n
numeric. The number of answer fields to read (in multiples of 5), i.e., 5, 10, ..., 45. By default taken from the type field.
density
numeric. Resolution used in the conversion of PDF images to PNG. This requires ImageMagick's convert to be available on the system.
size
numeric. Size of the boxes containing the check marks relative to the image height. This can be tweaked somewhat but should typically stay close to the default of 0.029.
threshold
numeric. Vector of thresholds for the gray levels in the check mark boxes. If the average gray level is between the two thresholds, the box is judged to be checked. If it is above the second threshold, some heuristic is employed for judging whether the box contains a cross or not.
minrot
numeric. Minimum angle for rotating images, i.e., images with a lower angle are considered to be ok.
string
logical. Are the files to be scanned manually marked string exercises (rather than single/multiple choice exercises)?
##### Details
nops_scan is a companion function for exams2nops. Exams generated with exams2nops can be printed and the filled out answer page can be scanned. Then, nops_scan can be employed to read the information in the scanned PDF/PNG images. The results are one text line per image containing the information in a very simple space-separated format. If images only contains PNG files, then the R function readPNG is sufficient for reading the images into R. If images contains PDF files, these need to be converted to PNG first which requires PDFTk, GhostScript, and ImageMagick's convert to be available on the system. On Linux(-esque) systems this is typically easy to install via the pdftk and imagemagick packages. The download links for Windows are: http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/pdftk_free-2.02-win-setup.exe, http://www.imagemagick.org/script/binary-releases.php#windows, http://www.ghostscript.com/download/gsdnld.html. Practical recommendations: The scanned images produced by scanners or copying machines typically become smaller in size if the images are read in just black/white (or grayscale). This may sometimes even improve the reliability of reading the images afterwards. The printed exams are often stapled in the top left corner which has to be unhinged somehow by the exam participants. Although this may damage the exam sheet, this is usually no problem for scanning it. However, the copying machine's sheet feeder may work better if the sheets are turned upside down (so that the damaged corner is not fed first into the machine). This often improves the scanning results considerably and can be accommodated by setting rotate = TRUE in nops_scan.
##### Value
A character vector with one element per scanned file (returned invisily if written to an output ZIP archive). The output contains the following space-separated information: file name, sheet ID (11 digits), scrambling (2 digits), type of sheet (3 digits, number of questions rounded up to steps of 5), 0/1 indicator whether the replacement sheet was used, registration number (7 digits), 45 multiple choice answers of length 5 (all 00000 if unused).
##### See Also
exams2nops, nops_eval
##### Aliases
• nops_scan
##### Examples
## scanned example images stored in exams package
img <- dir(system.file("nops", package = "exams"), pattern = "nops_scan",
full.names = TRUE)
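## read the scanned images (an illustrative call based on the Usage section above,
## not the package's original example; file = FALSE suppresses the output ZIP archive)
nops_scan(img, file = FALSE)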
|
# Power set
The power set of a set $X$ is denoted by $\mathcal{P}(X)$, sometimes $2^X$ (a number to the power of a set is not defined (as it cannot be usefully defined) leaving it free to be used as notation. This comes from the cardinality of the power set being $2^{|X|}$)
The characteristic property of the power set is that $\forall U\subset X:U\in\mathcal{P}(X)$
It is the set of all subsets of $X$
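For example, if $X=\{a,b\}$ then $\mathcal{P}(X)=\{\emptyset,\{a\},\{b\},\{a,b\}\}$, which indeed has $2^{|X|}=2^2=4$ elements.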
|
# Laplace Transformation Question.
• Nov 11th 2009, 11:35 AM
Phyxius117
Laplace Transformation Question.
Not sure how to start this question -.-
Find the Laplace transform of the function:
f(t)=te^(2t)cos(2t)
$\mathcal{L}\left\{f(t)\right\} = ?$
Thanks for the help!!
• Nov 11th 2009, 12:17 PM
pickslides
Hi there, been a while since i've done one of these, the definition is
$\mathcal{L}\left\{f(t)\right\}=\int_0^\infty e^{-st}f(t)~dt$
Therefore
$\mathcal{L}\left\{te^{2t}\cos(2t)\right\}=\int_0^\infty e^{-st}te^{2t}\cos(2t)~dt=\int_0^\infty e^{t(2-s)}t\cos(2t)~dt$
Also by the shift theorem
$\mathcal{L}\left\{e^{2t}\cos(2t)\right\}=\frac{s-2}{(s-2)^2+4}$
• Nov 11th 2009, 12:20 PM
Phyxius117
Ohhh I see now
I can use the Theorem of Differentiation of Transforms after that shift transformation.
Ima give it a try now!!
• Nov 11th 2009, 12:27 PM
Phyxius117
Thanks for the help!! I got the correct answer!!
It's
(s*(s-4))/((s^2-4s+8)^2)
• Nov 11th 2009, 12:29 PM
pickslides
Do you mean?
$\mathcal{L}\left\{t\times f(t)\right\}=-F'(s)$
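Applying that to $F(s)=\frac{s-2}{(s-2)^2+4}=\frac{s-2}{s^2-4s+8}$ gives $F'(s)=\frac{(s^2-4s+8)-(s-2)(2s-4)}{(s^2-4s+8)^2}=\frac{-s^2+4s}{(s^2-4s+8)^2}$, so $-F'(s)=\frac{s(s-4)}{(s^2-4s+8)^2}$, which matches the answer above.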
• Nov 11th 2009, 12:30 PM
pickslides
Quote:
Originally Posted by Phyxius117
|
# Tag chemical equations with letters and numbers
Just started in the writing of chemistry stuff using LaTeX.
Question
Is it possible to tag chemical equations with a letter and numbers as R1, R2 and so on?
Output and MWE
\documentclass[12pt,twoside]{report}
\usepackage[spanish,es-noquoting]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{nccmath}
\usepackage{chemformula}
\begin{document}
\begin{align}
&\ch{B + H+ <=> BH+}\\
&\ch{AH+ + B <=> A + BH^{+}} \label{reaccion_neta}
\end{align}
The sum \ref{reaccion_neta} is:
\begin{align}
\Delta G^0_{1} + \Delta G^0_{ref}&=-RT(ln(Ka_1)+ln(Ka^{-1}_{ref}) \\
G^0_{BH^+} - G^0_{B} + G^0_{A}- G^0_{AH+} &=-RT(ln(Ka_1)+ln(Ka^{-1}_{ref})\\
&= RT2.303pKa_{1}- RT2.303pKa_{ref}
\end{align}
\end{document}
• We appreciate it if there is only one question per post. That allows us to keep focussed. – Masroor Aug 31 '17 at 17:17
• There are various packages for typesetting chemistry related stuff. You could have a look at mhchem or chemmacros. Both offer the possibility to align and number reaction equations. This answer might be interesting as well. – leandriis Aug 31 '17 at 17:23
• Unrelated: use \ln not ln for the natural logaritm, and you'd probably want _{\mathrm{ref}} instead of _{ref}. – Torbjørn T. Aug 31 '17 at 17:34
• How would you like the "letter numbering" to be relative to the "equation numbering"? (A) (B) (1) (2) (3)...? – Werner Aug 31 '17 at 17:41
• @Hernan Miraola As already commented before, there already is an answer to this part of your question here: tex.stackexchange.com/a/147854/134144 Ludovic C. defined a new environment for numbering reaction equations with R1... while keeping the counter of math equations untouched. – leandriis Aug 31 '17 at 19:05
Here you are. I define an environment, somewhat like subequations, which re-defines the equation counter as a new chemeqn counter. Also, I propose another alignment (second group) based on alignat, which I find nicer:
\documentclass[12pt,twoside]{report}
\usepackage[spanish,es-noquoting]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{mathtools, nccmath}
\usepackage{chemformula}
\makeatletter
\newcounter{chemeqn}
\newenvironment{chemequations}{\let\c@equation\c@chemeqn\def\theequation{R\thechemeqn}}{}
\makeatother
\begin{document}
\begin{chemequations}
\begin{align}
&\ch{B + H+<=> BH+}\\
&\ch{AH+ + B<=> A + BH^{+}} \label{reaccion_neta}
\end{align}
\begin{alignat}{2}
\ch{B &+ H+ & & <=> BH+}\\
\ch{AH+ &+ B & & <=> A + BH^{+}} \label{reaccion_neta_alt} % own label to avoid a duplicate of reaccion_neta
\end{alignat}
\end{chemequations}
The sum \eqref{reaccion_neta} is:
\begin{align}
\Delta G^0_{1} + \Delta G^0_\textrm{ref}&=-RT(\ln(Ka_1)+\ln(Ka^{-1}_\textrm{ref}) \\
G^0_{BH^+} - G^0_{B} + G^0_{A}- G^0_{AH+} &=-RT(\ln(Ka_1)+\ln(Ka^{-1}_\textrm{ref})\\
&= RT2.303pKa_{1}- RT2.303pKa_\textrm{ref}
\end{align}
\end{document}
|
# What is the value of a + b?
Let $a$ and $b$ be real numbers such that $x^4+2x^3-x^2+ax+b = (Q(x))^2$ for some polynomial $Q(x)$. What is the value of $a + b$?
Oct 16, 2018
#1
$$p(x)=x^4+2x^3-x^2+ax+b = (q(x))^2,~\text{for some polynomial }q(x)$$
$$\text{we know q(x) will be of order 2 so write it as }\\ q(x)=q_2 x^2 + q_1 x + q_0 \\ (q(x))^2 = q_2^2 x^4+2 q_1 q_2 x^3+\left(q_1^2+2 q_0 q_2\right) x^2+2 q_0 q_1 x+q_0^2$$
$$\text{and we have equations }\\ q_2^2 = 1\\ 2q_1q_2 = 2\\ (q_1^2+2q_0q_2)=-1\\ 2q_0q_1=a\\ q_0^2=b$$
$$\text{clearly }q_2 = \pm 1\\ \text{suppose }q_2=1 \\ 2q_1 (1)=2,~q_1=1\\ (1+2q_0(1))=-1 \\ q_0=-1 \\ b=q_0^2 = 1 \\ a=2q_0 q_1 = 2(-1)(1) = -2$$
$$\text{now suppose }q_2=-1 \\ 2q_1(-1) = 2,~q_1=-1 \\ (1+2q_0(-1))=-1,~q_0=1 \\ b=q_0^2 = 1 \\ a = 2q_0 q_1 = 2(1)(-1) = -2$$
$$\text{so in both cases }a=-2,~b=1 \\ \text{and }a+b = -1$$
I kind of suspect there is a simpler way to solve this.
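One shortcut: the leading terms force $q(x)=\pm(x^2+x+c)$, and squaring gives $x^4+2x^3+(2c+1)x^2+2cx+c^2$. Matching the $x^2$ coefficient, $2c+1=-1$, so $c=-1$, hence $a=2c=-2$, $b=c^2=1$ and $a+b=-1$ as above.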
Oct 17, 2018
edited by Rom Oct 17, 2018
|
Question
# A sphere and a cone have the same volume, and each has a radius of 6 centimeters. What is the height of the cone?
Solid Geometry
A sphere and a cone have the same volume, and each has a radius of 6 centimeters. What is the height of the cone?
Equate the volumes of the sphere and the cone: $$V_{sphere}=V_{cone}$$ $$\displaystyle\frac{4}{3}\pi r^{3}=\frac{1}{3}\pi r^{2}h$$
$$\displaystyle\frac{4}{3}\pi(6)^{3}=\frac{1}{3}\pi(6)^{2}h$$
$$\displaystyle 288\pi=12\pi h$$ so $$\displaystyle h=\frac{288\pi}{12\pi}=24\ \text{centimeters}$$
|
How I spent two weeks hunting a memory leak in Ruby
(This post was translated, the original version is in my Russian blog.)
Foreword
This is a story about hunting a memory leak. A long story, because I go into much detail.
Why describe my adventures? Not that I wanted to save all those tiny code pieces and scripts only. It rather occurred to me that it was UNIX way which I had pursued. Every step was related to yet another small utility, or a library, which solves its task well. And finally I succeeded.
Also it was an interesting journey! Sometimes I got in and out of bed with thoughts about what happens to memory.
I am grateful to my colleagues in Shuttlerock company which actually worked (as in “being productive for customers”), while I was playing memory detective. Grats to the company itself too!
Introduction
So we found ourselves in an unpleasant situation: our Rails app on Heroku platform had a memory leak. One can see it on this graph:
Now, every deploy or restart makes memory line go up fast (see around 3 AM, 8 AM). That’s totally normal, because Ruby is a dynamic language. Some requires are still being fired, something loads up. When everything is loaded, memory consumption should be quasi-constant. Requests are coming, objects get created, the line goes up. Garbage collector does its job – it goes down. The app should be able to stay forever in such a dynamic balance.
But in our case there’s an apparent upward trend. A leak! When our Heroku 2X dyno (we use puma in cluster mode with 2 worker processes) reaches 1 GB, it becomes slow as hell, unresponsive and needs to be restarted.
Reproduce the thing locally
Thou shalt not make experiments on the production server. First thing I did was downloading latest production database dump and running the app in production environment on my local Linux server which resides in the closet (you should have one too!). Our app is a SaaS product and has a special middleware which detects customer sites by domain, therefore I had to introduce small changes to app code enabling me to make requests like so: curl http://localhost:3000/…. Environment variables are best for this purpose.
As you can see, if environment variable QUERY_SITE_ID is set, site ID gets taken from query parameter site_id:
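In outline it is something like this (a sketch: the model and helper names are placeholders, only the QUERY_SITE_ID / site_id convention is given):
# inside the site-detection middleware (names are illustrative)
def detect_site(request)
  if ENV['QUERY_SITE_ID']
    Site.find_by(id: request.params['site_id'])   # local testing: site comes from ?site_id=...
  else
    Site.find_by(domain: request.host)            # normal mode: site comes from the domain
  end
end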
curl http://localhost:3000/?site_id=123
In most cases you’ll have to specify config.force_ssl = false in config/environments/production.rb, set DEVISE_SECRET_KEY variable (if you use devise), maybe something else. The curl command must finally work.
So, server works locally, what’s next? You have to supply incoming requests. The wonderful siege utility allows to put load on web servers in different modes and record statistics.
I decided to not hit a single route, but rather collect real URLs used by web clients. Easy-peasy: run heroku logs -t | tee log/production.log for some time, then extract URLs from the log. I wrote a small utility which parsed log, collected paths, site_id values reported by our middleware and saved everything into urls.txt like this:
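http://localhost:3000/?site_id=123
http://localhost:3000/dashboard?site_id=45
(the paths above are made-up placeholders; siege simply wants one full URL per line)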
You can create such a file by hand, or resort to awk, grep, sed.
Let’s run siege:
siege -v -c 15 --log=/tmp/siege.log -f urls.txt
Here siege will create 15 parallel clients and use urls.txt as its input.
If you do everything right, you’ll experience the memory leak. It can be seen with top and ps utilities — the number to look for is called RSS (Resident Set Size). To save myself from running them, though, I’ve added the following code to the app:
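Roughly along these lines (a sketch, not the exact snippet from the app; it just logs the process RSS from ps together with the live-slot count every few seconds):
Thread.new do
  loop do
    rss_kb = `ps -o rss= -p #{Process.pid}`.to_i
    Rails.logger.info "RSS: #{rss_kb} KB, heap_live_slots: #{GC.stat[:heap_live_slots]}"
    sleep 5
  end
end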
Accurate records with increasing RSS started to appear in the log. More on GC.stat[:heap_live_slots] later.
After first experiments I switched puma into single mode, because it leaked too and a single process is easier to deal with.
Hunting leak in Ruby code
Being certain of the leak's existence, I started playing the Sherlock Holmes part.
Some words need to be said about how MRI works with memory in general. Objects are stored in a heap controlled by the interpreter. The heap consists of separate pages, each being 16 Kb in size and every object using 40 bytes of memory. When object is created, MRI searches for a free slot: if there’s none, extra page is added to the heap. Now, not every object fits into 40 bytes. If it needs more, additional memory is allocated (via malloc).
Memory is freed automatically when garbage collector (GC) runs. Modern MRIs have quite effective incremental GC with generations and two phases: minor and major. Based on heuristic principle “Most objects die young”, minor GC tries to find unneeded objects only among newly created ones. This allows to run major GC less often, and this guy performs classic Mark-and-Sweep algorithm for ALL objects.
It must be noted that intricacies of different generations have nothing to do with hunting memory leaks. There’s one important thing: are all objects freed, which are created while processing web request, or not? Actually, in web server context all objects can be divided into three groups:
1. Statics. All loaded gems, esp. Rails, and app code. All this gets loaded once in production environment and doesn’t change.
2. Slow dynamics. There’s a certain amount of long-lived objects, e.g. cache of prepared SQL statements in ActiveRecord. This cache is per DB connection and is max. 1000 statements by default. It will grow slowly, and total number of objects will increase until cache reaches full size (2000 strings * number of DB connections)
3. Fast dynamics. Objects created during request processing and response generation. When response is ready, all these objects can be freed.
In third case, if any object doesn’t get freed, you’ll have a leak. Here’s an illustration:
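The illustration was a tiny controller along these lines (reconstructed from the next sentence; only FOO, the "haha" string and MyController#index are given):
class MyController < ApplicationController
  FOO = []

  def index
    FOO << "haha"   # FOO is never cleared, so every request pins one more string
    render plain: "ok"
  end
end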
Constants are not garbage-collected, therefore consecutive MyController#index calls will lead to FOO inflation. The heap will grow to accommodate more and more "haha" strings.
If there are no leaks, heap size will oscillate. The minimum size corresponds to objects from groups 1 and 2 (see above). For example, in our app this size is slightly above 500,000, while an empty app created with rails new app instantiates roughly 300,000 objects. Maximum heap size depends on how often major GC runs. But: after every major GC the number of objects is always back to the lowest. Leaks will lead to low boundary increased over time. The GC.stat[:heap_live_slots] figure reflects current heap size.
The most convenient way to explore GC-related things is using gc_tracer gem, built by Koichi Sasada, the Ruby developer team member and the author of incremental GC implementation in Ruby 2.1 and 2.2. Having added
to config/application.rb, we get log/gc.log file which is filled by GC stats and also getrusage results (the latter is useful because one of fields contains RSS figure which we are so interested in!).
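The addition in question is a one-liner along these lines (a sketch; the ENV guard is my own convenience, and option names for the getrusage output may differ between gc_tracer versions):
# config/application.rb
if ENV['GC_TRACER']
  require 'gc_tracer'
  GC::Tracer.start_logging(Rails.root.join('log', 'gc.log').to_s)
end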
Every line of this log has over 50 numbers, but here’s some simple UNIX magic to the rescue. The command I ran along with puma and siege was:
tail -f log/gc.log | cut -f 2,4,7,8,9,15,16,19,21,22,25,26,27,28,29,30,36
which gives this:
First number in line is timestamp in milliseconds. Second is the number of pages. Third – heap size (in objects). The 11th (with values in 581634…613199 range) is the number of “old” objects, i.e. objects which are not inspected during minor GC run. The last number in line is RSS in kilobytes.
Still so many numbers! Let’s plot them. We could load this log directly into Excel (sorry, LibreOffice Calc), but that’s not classy. Let’s use gnuplot instead, which is able to plot directly from files.
Unfortunately, gnuplot doesn’t support timestamps in milliseconds, so I had to write a small script for timestamp conversion:
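Something like this awk one-liner does the job (a sketch, assuming the tab-separated gc_tracer output with the event type in column 1 and the millisecond tick in column 2):
awk -F'\t' '$1 == "end_sweep" { $2 = $2 / 1000; print }' log/gc.log > /tmp/gc.dat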
Along with turning milliseconds into seconds some extra information is discarded here. gc_tracer generates data on all stages of GC but we are interested only in the final data (first column contains “end_sweep”).
This gnuplot script
gets us this picture:
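(A sketch of such a script; the data file name and column indices here are placeholders:)
set ytics nomirror
set y2tics
plot '/tmp/gc.dat' using 1:2 axes x1y1 with lines title 'old objects', \
     '' using 1:3 axes x1y2 with lines title 'RSS'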
Red curve (left scale) displays the number of “old” objects. It behaves like there’s no leak. Blue curve (right scale) is RSS, which never stops rising.
The conclusion is: there are no leaks in Ruby code. But I wasn’t keen enough to grasp that at the moment and spent more time pursuing a false path.
False path: messing with heap dumps
Modern versions of MRI are equipped with powerful means for memory analysis. For instance, you can enable object creation tracing, and for each newly created object the location where it was instantiated (source file, line) will be saved and accessible later:
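That switch is flipped with the standard objspace API:
require 'objspace'
ObjectSpace.trace_object_allocations_start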
Now we can dump the heap into a file:
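Again standard objspace API (the output path is just an example):
GC.start
ObjectSpace.dump_all(output: File.open('/tmp/heap.json', 'w'))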
Location information will be dumped too, if it’s available. The dump itself is a JSON Lines file: data for each object are in JSON format and occupy one line in the file.
There are gems which make use of allocation tracing, like memory_profiler and derailed. I’ve decided to investigate what happens with the heap in our app as well. Having had no luck with memory_profiler, I went for generating and analyzing dumps myself.
First I wrote my own benchmark like derailed does, but later switched to generating dumps from live app, using rbtrace. If you have gem 'rbtrace' in your Gemfile, here’s a way to generate dump:
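A typical rbtrace invocation for that (a sketch; the PID and output path are placeholders):
bundle exec rbtrace -p <PUMA_PID> -e 'Thread.new{ GC.start; require "objspace"; ObjectSpace.dump_all(output: File.open("/tmp/dump-#{Time.now.to_i}.json", "w")) }'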
Now let’s assume we have three dumps (1, 2, 3), generated at different moments in time. How can we spot a leak? The following scheme has been suggested: let’s take dump 2 and remove all objects which are present in dump 1. Then let’s also remove all objects which are missing from dump 3. What’s left is objects which were possibly leaked during the time between dump 1 and 2 creation.
I even wrote my own utility for this differential analysis procedure. It’s implemented in… Clojure, because I like Clojure.
But everything I could find was the prepared SQL statements cache mentioned above. Its contents, SQL strings, weren’t leaked, they just lived long enough to appear in dump 3.
Finally I had to admit that there are no leaks in Ruby code and had to look for them elsewhere.
Learning jemalloc
I made following hypothesis: if there are no leaks on Ruby side, but memory is somehow still leaking, there must be leaks in C code. It could be either some gem’s native code or MRI itself. In this case C heap must be growing.
But how to detect leaks in C code? I tried valgrind on Linux and leaks on OS X. They haven’t got me anything interesting enough. Still looking for something, I’ve stumbled upon jemalloc.
Jemalloc is a custom implementation of malloc, free, and realloc that is trying to be more effective than standard system implementation (not to count FreeBSD, where jemalloc is the system implementation). It uses a bag full of tricks to achieve this goal. There’s an own page system with pages allocated via mmap; the allocator uses independent “arenas” with thread affinity allowing for less synchronization between threads. Allocated blocks are divided into three size classes: small (< 3584 bytes), large (< 4 Mb), and huge – each class is handled differently. But, most importantly, jemalloc has statistics and profiling. Finally, MRI 2.2.0 supports jemalloc natively! The LD_PRELOAD hack is not needed anymore (by the way, I couldn’t make it work).
I quickly went into installing jemalloc and then MRI with jemalloc enabled. Not without trouble. Ubuntu and Homebrew-provided jemalloc library is built without profiling. Also you can’t build certain Ruby gems with most recent jemalloc version 4.0.0, released in August 2015. E.g. pg gem doesn’t like C99 <stdbool.h> included into <jemalloc/jemalloc.h>. But everything is smooth with jemalloc 3.6.0.
You can follow this instruction to build MRI with rbenv, however at the end I opted for building it myself:
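The build itself is the usual configure/make dance with one extra flag (a sketch; the prefix is a placeholder):
./configure --prefix=$HOME/.rubies/ruby-2.2.3-jemalloc --with-jemalloc
make -j4 && make install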
You’ll also need --with-openssl-dir=/usr/local/opt/openssl for Homebrew.
This Ruby works as normal. But it reacts to MALLOC_CONF environment variable:
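For example, asking jemalloc to print its statistics when the process exits:
MALLOC_CONF=stats_print:true ruby -e 'x = "hello"' 2> jemalloc-stats.txt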
When Ruby is done, you’ll get quite detailed memory allocation stats printed to stderr. I used this setting to run puma overnight (with siege attacking it). By morning RSS grew up to 2 gigs. Hitting Ctrl-C brought me to the stats with allocated memory figure being close to same 2 gigs. Eureka! The C code leak hypothesis has been confirmed!
Profiling and more detailed stats
Next question was: where in C code are those 2 gigs allocated? Profiling helped here. The jemalloc profiler stores addresses which called malloc, calculates how much memory has been allocated from each address, and stores everything to the dump. You can enable it with same MALLOC_CONF, specifying prof:true flag. In this case final dump will be generated when the process exits. Dump is to be analyzed with pprof program.
Unfortunately, pprof couldn’t decode addresses:
I had to subtract the start of the code segment (this information is printed when the process exits) from these numbers and use the info symbol command in gdb (e.g. info symbol 0x2b5f9). The address appeared to belong to the objspace_xmalloc function (it's declared static, maybe that's the reason for non-showing). A more representative profile, with puma being hit by siege for 2 hours, showed that this function allocated 97.9 % of total memory allocated. Now, the leak has something to do with Ruby indeed!
Having become more certain in my search area, I’ve decided to investigate statistical patterns of allocated blocks. Feeling not inclined to parse jemalloc textual stats output, I wrote my own gem, jemal. The main functionality is in Jemal.stats method which returns all statistics of interest as one big hash.
What was left is adding a small piece of code into the app:
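Roughly this kind of thing (a sketch; the only documented piece is Jemal.stats returning a big hash, while the thread, interval and log path are assumptions):
require 'json'

Thread.new do
  File.open(Rails.root.join('log', 'jemalloc.log'), 'a') do |log|
    loop do
      log.puts JSON.dump({ ts: (Time.now.to_f * 1000).to_i }.merge(Jemal.stats))
      log.flush
      sleep 60
    end
  end
end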
…and run puma and siege overnight again.
By morning log/jemalloc.log was big enough and could be analyzed. jq tool proved being extremely helpful. First I decided to see how memory grows:
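The pipeline was along these lines (a sketch; the ts/allocated key names come from the description, the output file name is a placeholder):
jq '.ts, .allocated' log/jemalloc.log | paste - - > /tmp/allocated.dat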
Look how UNIX way works here! jq parses JSON in every line and outputs ts values and allocated values in turn:
Then paste - - joins its input line by line, separating two values with TAB:
Such a file can be fed to gnuplot:
Linear growth! What’s with block sizes?
jq can extract data even from deeply embedded structures.
The plot testifies that it's small objects that are being leaked. But that's not all: jemalloc provides separate stats for different block sizes within one class! In the small class, every allocated block has one of 28 fixed sizes (the requested size is simply rounded up): 8, 16, 32, 48, 64, 80, …, 256, 320, 384, …, 3584. Stats for every size are kept separately. By staring at the log I noted some anomaly with size 320. Let's plot it as well:
Wow! Memory is consumed by objects of one size. Everything else is just constant, it’s evident from the fact that lines are parallel. But what’s up with size 320? Apart from total memory allocated, jemalloc calculates 8 more indices, including the number of allocations (basically, malloc calls) and deallocations. Let’s plot them too:
Same indices for blocks with the neighboring size, 256, are included for comparison. It's apparent that the blue and orange curves blend together, which means that the number of deallocations for 256-sized blocks is approximately equal to the number of allocations (that's what we call healthy!). Compare this to size 320, where the number of allocations (magenta curve) runs away from the number of deallocations (green curve). Which finally proves the existence of the memory leak.
Where’s the leak?
We’ve squeezed everything we could from stats. Now the leak source is to be found.
I had no better idea than adding some debug prints to gc.c:
Not wanting to drown in information, I started to run curl by hand instead of using siege. I was interested in how many blocks get leaked during one request processing. Yet another middleware was injected to the app:
I’ve used initializer (not config/application.rb) here to ensure that the middleware is exactly at the top of middleware stack.
Having run curl a few times, I saw that M-D value increases by 80-90 every request. Allocations and deallocations also appeared on stderr (and in the log by virtue of tee). Cutting last portion of log between ------------------------- and MAM BEFORE ..., I ran it through this simple script:
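A sketch of such a script, under assumptions about the debug-print format (lines like "malloc ADDR SIZE" and "free ADDR"); it collects addresses that were malloc'ed but never freed and groups them by size:
allocs = {}
ARGF.each_line do |line|
  case line
  when /\Amalloc (0x\h+) (\d+)/ then allocs[$1] = $2.to_i
  when /\Afree (0x\h+)/         then allocs.delete($1)
  end
end
allocs.group_by { |_addr, size| size }
      .sort_by { |_size, list| -list.size }
      .each { |size, list| puts "size #{size}: #{list.size} blocks not freed" }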
Now here they are, the possible leak addresses:
There are quite many blocks with size 312. What does it mean? Nothing! I wanted to look at the memory contents and used gdb to connect to live process. So, taking some address and looking what’s there:
Looks like a bunch of pointers. Intel is little-endian, so most significant byte is at the end and first line represents 0x7f1139dee134 number (the 0x7f113.. thing made me believe it’s an address). Helpful? Not much.
Then I wanted to see the backtraces of calls which allocated those blocks. Lazy googling revealed the following code which works with gcc:
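That classic gcc/glibc snippet looks roughly like this (execinfo.h's backtrace / backtrace_symbols_fd; the exact original may differ):
#include <execinfo.h>
#include <unistd.h>

static void print_backtrace(void)
{
    void *trace[16];
    int depth = backtrace(trace, 16);
    backtrace_symbols_fd(trace, depth, STDERR_FILENO);  /* print symbols straight to stderr */
}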
I’ve added the call of this function to objspace_xmalloc:
Then I repeated everything again, ran the possible leak detection script and started to look up the found addresses in the log, as backtraces were neatly printed near them.
And what I saw was…
More than a dozen addresses related to redcarpet.so showed up. Now that's who ate all our memory!!! It was the redcarpet gem, a Markdown to HTML renderer.
Verification and fix
It was easy after finding the offender. I ran 10000 renderings in the console – leak confirmed. Ditched Rails, loaded the gem separately and repeated the same – the leak is still there!
The only place where redcarpet’s native code allocated memory through Ruby interface is in rb_redcarpet_rbase_alloc function which is basically a C constructor for Redcarpet::Render::Base. The allocated memory wasn’t freed during garbage collection. Quick googling revealed an example of how to write such a constructor correctly in tenderlove’s blog. And the fix was simple.
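The usual shape of such a fix is to hand the GC a free function when wrapping the malloc'ed struct. A sketch of the pattern (not the actual redcarpet patch; the struct and function names are placeholders):
#include <ruby.h>

struct renderer_state { int placeholder; };  /* stand-in for the real renderer struct */

static VALUE rb_renderer_alloc(VALUE klass)
{
    struct renderer_state *state = ALLOC(struct renderer_state);
    /* passing xfree as the free function lets the GC release the C struct
       together with the Ruby object instead of leaking it */
    return Data_Wrap_Struct(klass, NULL, xfree, state);
}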
Bingo!
Conclusion
1. Probably, it shouldn’t have taken 2 weeks. I’d spend much less now.
2. False paths are getting you out of the way, and you have to get back. Aside from dumps I’ve also spent some time trying to configure garbage collector with environment variables. The only thing I could take with me was RUBY_GC_HEAP_INIT_SLOTS=1000000. With this setting our app fits into the heap and doesn’t need more slots.
3. It seems that you can debug anything in our times. The number of helpful tools and libraries is incredible. If you don’t succeed, just try more.
P.S.
Redcarpet guys still did nothing about my pull request as of now (Sep 21, 2015), but we’ve deployed the fix to production long ago. Compare this leak-y picture (same as in the beginning of this post)
with normal memory consumption graph:
Here’s no trend at all. As it should be.
Update on 2015/09/29. Redcarpet fix has been released.
<< Older
|
# What is a soap, and how are they made?
##### 1 Answer
Feb 13, 2017
We have incomplete information...
#### Explanation:
A soap is an (alkali metal) salt of a fatty acid, a long chain carboxylic acid. A typical soap is $\text{sodium stearate}$, $H_3C(CH_2)_{16}CO_2^{-}Na^{+}$. Depending on the metal salt, and the length and source of the soap chain, different soaps (and later detergents) can be manufactured. Soaps made from olive oils are particularly fine, and mild.
Of course, during their manufacture, colouring and perfumes may be added. Traditionally, these came from natural sources, i.e. flowers, or herbs, or something that smelled nice. For modern manufacture (which might produce tonnes of soap), the flavouring probably comes from a synthetic perfume.
|
Public Group
# Help me with my application?
## Recommended Posts
Hello. I've finally completed my first application in DirectX. However, I get the feeling that it's really messy and inefficient. Would anybody like to help me out? I've got the actual application and source code posted at this location. http://www.guilddnr.net/dunescape/d3d_app.rar (this code was compiled with Visual C++ .NET 2003 edition, with the DirectX 9.0c SDK) Additionally, here is the code plainly:
//theJ89's Direct X Test and window creation program
#include <windows.h> //Includes windows,
#include <d3d9.h> //D3D,
#include <time.h> //D3D,
#include <d3dx9.h> //D3D,
#include <dxerr9.h> //and the D3D error handler?
//Removes rarely used things in windows.
#define WIN32_LEAN_AND_MEAN
//Describes the format of the vertex...
//Even though I make my own custom vertex structure I think it has to conform to these specifications.
#define POINT_FLAGS (D3DFVF_XYZRHW | D3DFVF_TEX1)
//Width and height of the menu along with the name of my program.
#define APP_NAME "theJ89's DirectX Test Program"
#define SCREEN_WIDTH 1024
#define SCREEN_HEIGHT 768
#define MAX_TEXTURES 16
//Window and instance pointers
HWND window;
HINSTANCE hInst;
//Program stops running when bRun is set to false
bool bRun=true;
//Alert displays a message box with the name of the application, an "OK" button, an exclamation mark and the intended message.
void Alert(LPCTSTR lpMessage)
{
ShowCursor(true);
if(window!=NULL){
MessageBox(window,lpMessage,APP_NAME,MB_OK | MB_ICONEXCLAMATION);
} else {
MessageBox(NULL,lpMessage,APP_NAME,MB_OK | MB_ICONEXCLAMATION);
}
ShowCursor(false);
}
//Confirm is like alert, but instead it has a question icon, yes/no buttons, and it returns the user's choice - true for yes and false for no.
bool Confirm(LPCTSTR lpMessage)
{
ShowCursor(true);
int returnedValue=0;
if(window!=NULL){
returnedValue=MessageBox(window,lpMessage,APP_NAME,MB_YESNO | MB_ICONQUESTION);
} else {
returnedValue=MessageBox(NULL,lpMessage,APP_NAME,MB_YESNO | MB_ICONQUESTION);
}
ShowCursor(false);
if(returnedValue==IDYES){
return true;
} else {
return false;
}
}
//This is my vertex format, according to the second DX3D tutorial. It's got X, Y, and Z along with a reciprocol homogeneous w component and the color.
struct Vertex2D
{
float x,y,z,rhw,u,v;
D3DCOLOR color;
};
//This is my array of vertices, arranged like so:
/*
1----2
| / |
| / |
0----3
*/
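//NOTE: the following global declarations are assumed (they are used throughout the code
//below but are missing above); MAX_QUADS is a guess and must be at least 100 for the
//MyImages array in WinMain.
#define MAX_QUADS 128
Vertex2D vertices[MAX_QUADS*4];
class Image;
Image* cImages[MAX_QUADS];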
class Image
{
public:
Image();
Image(float x, float y, float width, float height, IDirect3DTexture9* cTex);
~Image();
void Move(float x, float y);
void UpdateVerts();
bool show;
float x,y,width,height,rotation;
short unsigned int index;
D3DCOLOR color;
IDirect3DTexture9* cSource;
};
Image::Image()
{
//Init index to -1; find open slot.
int oIndex=-1;
//NOTE: the loop header is reconstructed; the MAX_QUADS bound is an assumption
for(int i=0; i<MAX_QUADS && oIndex<0; i++)
{
if(cImages[i]==NULL)
{
oIndex=i;
}
}
if(oIndex<0)
{
Alert("Attempted to create image but no open slots.");
return;
}
else
{
index=oIndex;
}
cImages[index]=this;
this->show=false;
this->cSource=NULL;
this->x=0;
this->y=0;
this->width=0;
this->height=0;
this->rotation=0.0f;
//Set U, V coordinates and RHW, Z, and Color to defaults
vertices[(index*4)].u=0.0f;
vertices[(index*4)].v=1.0f;
vertices[(index*4)+1].u=0.0f;
vertices[(index*4)+1].v=0.0f;
vertices[(index*4)+2].u=1.0f;
vertices[(index*4)+2].v=0.0f;
vertices[(index*4)+3].u=1.0f;
vertices[(index*4)+3].v=1.0f;
for(int i=0; i<4; i++)
{
vertices[(index*4)+i].rhw=1.0f;
vertices[(index*4)+i].z=1.0f;
vertices[(index*4)+i].color=0xFFFFFFFF;
}
//Set X/Y positions
UpdateVerts();
}
Image::Image(float x, float y, float width, float height, IDirect3DTexture9* cTex)
{
//Init index to -1; find open slot.
int oIndex=-1;
//NOTE: the loop header is reconstructed; the MAX_QUADS bound is an assumption
for(int i=0; i<MAX_QUADS && oIndex<0; i++)
{
if(cImages[i]==NULL)
{
oIndex=i;
}
}
if(oIndex<0)
{
Alert("Attempted to create image but no open slots.");
return;
}
else
{
index=oIndex;
}
cImages[index]=this;
this->show=true;
this->cSource=cTex;
this->x=x;
this->y=y;
this->width=width;
this->height=height;
this->rotation=0.0f;
//Set U, V coordinates and RHW, Z, and Color to defaults
vertices[(index*4)].u=0.0f;
vertices[(index*4)].v=1.0f;
vertices[(index*4)+1].u=0.0f;
vertices[(index*4)+1].v=0.0f;
vertices[(index*4)+2].u=1.0f;
vertices[(index*4)+2].v=0.0f;
vertices[(index*4)+3].u=1.0f;
vertices[(index*4)+3].v=1.0f;
for(int i=0; i<4; i++)
{
vertices[(index*4)+i].rhw=1.0f;
vertices[(index*4)+i].z=1.0f;
vertices[(index*4)+i].color=0xFFFFFFFF;
}
//Set X/Y positions
UpdateVerts();
}
Image::~Image()
{
cImages[index]=NULL;
}
void Image::Move(float x, float y)
{
this->x=x;
this->y=y;
UpdateVerts();
}
void Image::UpdateVerts()
{
vertices[(index*4)].x=x;
vertices[(index*4)].y=y+height;
vertices[(index*4)+1].x=x;
vertices[(index*4)+1].y=y;
vertices[(index*4)+2].x=x+width;
vertices[(index*4)+2].y=y;
vertices[(index*4)+3].x=x+width;
vertices[(index*4)+3].y=y+height;
}
//DXGraphics is my singleton class to handle the Direct 3D interface, the device, the vertex buffer, rendering, and manage creation and cleanup.
//Currently I have the Render function set to draw a triangle fan that forms a rect as you can tell by the vertices.
class DXGraphics
{
public:
IDirect3DTexture9* d3dTextures[MAX_TEXTURES];
//Constructor, makes sure all pointers are initilized to NULL
DXGraphics()
{
d3d=NULL;
d3dDevice=NULL;
d3dVertexBuffer=NULL;
for(int i=0; i<MAX_TEXTURES; i++)
{
d3dTextures[i]=NULL;
}
}
//Destructor, releases the vertex buffers, device, and d3d interface
~DXGraphics()
{
for(int i=0; i<MAX_TEXTURES; i++)
{
if(d3dTextures[i]!=NULL)
d3dTextures[i]->Release();
}
if(d3dVertexBuffer!=NULL)
d3dVertexBuffer->Release();
if(d3dDevice!=NULL)
d3dDevice->Release();
if(d3d!=NULL)
d3d->Release();
}
//Initilizes the interface and allows me to create the device...
HRESULT createD3D()
{
if(NULL==(d3d=Direct3DCreate9(D3D_SDK_VERSION)))
{
return E_FAIL;
}
return S_OK;
}
//Creates the device used to handle... um, pretty much everything.
HRESULT createDevice()
{
D3DPRESENT_PARAMETERS d3dPresent;
ZeroMemory(&d3dPresent,sizeof(d3dPresent));
d3dPresent.Windowed=TRUE;
d3dPresent.BackBufferFormat=D3DFMT_UNKNOWN;
if( FAILED( d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, window, D3DCREATE_HARDWARE_VERTEXPROCESSING, &d3dPresent, &d3dDevice)))
{
return E_FAIL;
}
d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE,TRUE);
d3dDevice->SetRenderState(D3DRS_SRCBLEND,D3DBLEND_SRCALPHA);
d3dDevice->SetRenderState(D3DRS_DESTBLEND,D3DBLEND_INVSRCALPHA);
d3dDevice->SetFVF(POINT_FLAGS);
return S_OK;
}
//Loads a texture from a file into the first free slot of d3dTextures.
HRESULT loadTexture(LPCTSTR pszImgPath)
{
int iEmpty=-1;
char message[255];
char cBuffer[33];
ZeroMemory(&message,sizeof(message));
for(int i=0; i<MAX_TEXTURES&&iEmpty<0; i++)
{
if(d3dTextures[i]==NULL)
iEmpty=i;
}
if(iEmpty<0)
{
strcat(message, pszImgPath);
strcat(message, " - No available slots.");
return E_FAIL;
}
if(FAILED(D3DXCreateTextureFromFile(d3dDevice, pszImgPath, &d3dTextures[iEmpty]))){
strcat(message, "Attempting to load texture ");
strcat(message, pszImgPath);
strcat(message, " has failed.");
return E_FAIL;
}
strcat(message, pszImgPath);
itoa(iEmpty,cBuffer,10);
strcat(message, "(slot ");
strcat(message, cBuffer);
strcat(message, ")");
strcat(message, " has succeeded!");
return S_OK;
}
//Creates a vertex buffer for my polygons.
HRESULT createVertexBuffer()
{
if( FAILED( d3dDevice->CreateVertexBuffer(MAX_QUADS*4*sizeof(Vertex2D),0, POINT_FLAGS, D3DPOOL_DEFAULT, &d3dVertexBuffer, NULL ) ) )
{
return E_FAIL;
}
return S_OK;
}
//Copies the vertexes from the array in physical memory to the buffer on video memory, if I understood that correctly.
HRESULT fillVertexBuffer()
{
VOID* pVertices;
if(FAILED(d3dVertexBuffer->Lock(0,sizeof(vertices),(void**)&pVertices,0)))
{
Alert("Attempting to fill the vertex buffer has failed!");
return E_FAIL;
}
memcpy(pVertices,vertices,sizeof(vertices));
d3dVertexBuffer->Unlock();
//Alert("The vertex buffer has been filled!");
return S_OK;
}
//Clears the back buffer to black, starts drawing the scene,
//sets the stream to the buffer, sets the vertex format (I don't think
//it should be here but whatever works), and lastly draws the
//primative - a triangle fan comprised of the four vertices.
//From there it just ends the scene and displays it.
void Render()
{
if(d3dDevice!=NULL)
{
d3dDevice->Clear(0,NULL,D3DCLEAR_TARGET,D3DCOLOR_XRGB(0,0,0),1.0f,0);
if(SUCCEEDED(d3dDevice->BeginScene()))
{
d3dDevice->SetStreamSource(0,d3dVertexBuffer,0,sizeof(Vertex2D));
//NOTE: the loop header is reconstructed; the MAX_QUADS bound is an assumption
for(int i=0; i<MAX_QUADS; i++)
{
if(cImages[i]!=NULL&&cImages[i]->show)
{
d3dDevice->SetTexture(0,cImages[i]->cSource);
d3dDevice->SetTextureStageState(0,D3DTSS_COLOROP,D3DTOP_SELECTARG1);
d3dDevice->SetTextureStageState(0,D3DTSS_COLORARG1,D3DTA_TEXTURE);
d3dDevice->SetTextureStageState(0,D3DTSS_COLORARG2,D3DTA_DIFFUSE);
d3dDevice->SetTextureStageState(0,D3DTSS_ALPHAOP,D3DTOP_SELECTARG1);
d3dDevice->SetTextureStageState(0,D3DTSS_ALPHAARG1,D3DTA_TEXTURE);
d3dDevice->SetTextureStageState(0,D3DTSS_ALPHAARG2,D3DTA_DIFFUSE);
d3dDevice->DrawPrimitive(D3DPT_TRIANGLEFAN,i*4,2);
}
}
d3dDevice->EndScene();
}
d3dDevice->Present(NULL,NULL,NULL,NULL);
}
}
private:
IDirect3D9* d3d;
IDirect3DDevice9* d3dDevice;
IDirect3DVertexBuffer9* d3dVertexBuffer;
};
//Creates the singleton object that handles Direct3D
DXGraphics dx3d;
Image* pMyImage=NULL;
//Message Handler for windows
LRESULT CALLBACK WindowProcedure(HWND hWnd, UINT uMessage, WPARAM wParam, LPARAM lParam)
{
POINTS coords;
switch(uMessage)
{
case WM_DESTROY:
bRun=false;
break;
case WM_MOUSEMOVE:
coords = MAKEPOINTS (lParam);
if(pMyImage!=NULL) {
pMyImage->Move(coords.x-16,coords.y-16);
}
break;
case WM_LBUTTONDOWN:
bRun=false;
break;
/*
case WM_PAINT:
dx3d.Render();
ValidateRect( hWnd, NULL );
break;
*/
default:
break;
}
return DefWindowProc(hWnd,uMessage,wParam,lParam);
}
//I made a function to create a window because I like having everything for this in one function.
int MakeWindow(HINSTANCE hInstance)
{
WNDCLASSEX wc;
wc.cbSize = sizeof(WNDCLASSEX);
wc.style = CS_HREDRAW | CS_VREDRAW;
wc.lpfnWndProc = WindowProcedure;
wc.cbClsExtra = 0;
wc.cbWndExtra = 0;
wc.hInstance = hInstance;
wc.hIcon = NULL;
wc.hCursor = NULL;
wc.hbrBackground = (HBRUSH) GetStockObject (BLACK_BRUSH);
wc.lpszClassName = "DXTest";
wc.hIconSm = NULL;
if(!RegisterClassEx(&wc))
return false;
window=CreateWindowEx(NULL,"DXTest",APP_NAME,WS_CAPTION | WS_VISIBLE,0,0,SCREEN_WIDTH,SCREEN_HEIGHT,NULL,NULL,hInstance,NULL);
if(!window)
return false;
return true;
}
//Main windows function. Message loop, initilizes Direct X and renders the scene.
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCommandLine, int nCommandShow)
{
MSG msg;
hInst=hInstance;
int iTime=time(NULL);
int framecount=0;
float fps=1;
ZeroMemory(&vertices,sizeof(vertices));
for(int i=0; i<MAX_QUADS; i++) //loop header and [i] index restored - lost to the forum markup
{
cImages[i]=NULL;
}
MakeWindow(hInst);
ShowCursor(false);
if(FAILED(dx3d.createD3D()))
return 0;
if(FAILED(dx3d.createDevice()))
return 0;
if(FAILED(dx3d.createVertexBuffer()))
return 0;
//(the two dx3d.loadTexture(...) calls that filled d3dTextures[0] and [1] were lost from the original post;
// their failure checks would have looked like: if(FAILED(dx3d.loadTexture(...))) return 0;)
Image MyImages[100]={
Image(0,0,32,32,dx3d.d3dTextures[0]),
Image(32,0,32,32,dx3d.d3dTextures[0]),
Image(64,0,32,32,dx3d.d3dTextures[0]),
Image(96,0,32,32,dx3d.d3dTextures[0]),
Image(128,0,32,32,dx3d.d3dTextures[0]),
Image(160,0,32,32,dx3d.d3dTextures[0]),
Image(192,0,32,32,dx3d.d3dTextures[0]),
Image(224,0,32,32,dx3d.d3dTextures[0]),
Image(256,0,32,32,dx3d.d3dTextures[0]),
Image(288,0,32,32,dx3d.d3dTextures[0]),
Image(0,32,32,32,dx3d.d3dTextures[0]),
Image(32,32,32,32,dx3d.d3dTextures[0]),
Image(64,32,32,32,dx3d.d3dTextures[0]),
Image(96,32,32,32,dx3d.d3dTextures[0]),
Image(128,32,32,32,dx3d.d3dTextures[0]),
Image(160,32,32,32,dx3d.d3dTextures[0]),
Image(192,32,32,32,dx3d.d3dTextures[0]),
Image(224,32,32,32,dx3d.d3dTextures[0]),
Image(256,32,32,32,dx3d.d3dTextures[0]),
Image(288,32,32,32,dx3d.d3dTextures[0]),
Image(0,64,32,32,dx3d.d3dTextures[0]),
Image(32,64,32,32,dx3d.d3dTextures[0]),
Image(64,64,32,32,dx3d.d3dTextures[0]),
Image(96,64,32,32,dx3d.d3dTextures[0]),
Image(128,64,32,32,dx3d.d3dTextures[0]),
Image(160,64,32,32,dx3d.d3dTextures[0]),
Image(192,64,32,32,dx3d.d3dTextures[0]),
Image(224,64,32,32,dx3d.d3dTextures[0]),
Image(256,64,32,32,dx3d.d3dTextures[0]),
Image(288,64,32,32,dx3d.d3dTextures[0]),
Image(0,96,32,32,dx3d.d3dTextures[0]),
Image(32,96,32,32,dx3d.d3dTextures[0]),
Image(64,96,32,32,dx3d.d3dTextures[0]),
Image(96,96,32,32,dx3d.d3dTextures[0]),
Image(128,96,32,32,dx3d.d3dTextures[0]),
Image(160,96,32,32,dx3d.d3dTextures[0]),
Image(192,96,32,32,dx3d.d3dTextures[0]),
Image(224,96,32,32,dx3d.d3dTextures[0]),
Image(256,96,32,32,dx3d.d3dTextures[0]),
Image(288,96,32,32,dx3d.d3dTextures[0]),
Image(0,128,32,32,dx3d.d3dTextures[0]),
Image(32,128,32,32,dx3d.d3dTextures[0]),
Image(64,128,32,32,dx3d.d3dTextures[0]),
Image(96,128,32,32,dx3d.d3dTextures[0]),
Image(128,128,32,32,dx3d.d3dTextures[0]),
Image(160,128,32,32,dx3d.d3dTextures[0]),
Image(192,128,32,32,dx3d.d3dTextures[0]),
Image(224,128,32,32,dx3d.d3dTextures[0]),
Image(256,128,32,32,dx3d.d3dTextures[0]),
Image(288,128,32,32,dx3d.d3dTextures[0]),
Image(0,160,32,32,dx3d.d3dTextures[0]),
Image(32,160,32,32,dx3d.d3dTextures[0]),
Image(64,160,32,32,dx3d.d3dTextures[0]),
Image(96,160,32,32,dx3d.d3dTextures[0]),
Image(128,160,32,32,dx3d.d3dTextures[0]),
Image(160,160,32,32,dx3d.d3dTextures[0]),
Image(192,160,32,32,dx3d.d3dTextures[0]),
Image(224,160,32,32,dx3d.d3dTextures[0]),
Image(256,160,32,32,dx3d.d3dTextures[0]),
Image(288,160,32,32,dx3d.d3dTextures[0]),
Image(0,192,32,32,dx3d.d3dTextures[0]),
Image(32,192,32,32,dx3d.d3dTextures[0]),
Image(64,192,32,32,dx3d.d3dTextures[0]),
Image(96,192,32,32,dx3d.d3dTextures[0]),
Image(128,192,32,32,dx3d.d3dTextures[0]),
Image(160,192,32,32,dx3d.d3dTextures[0]),
Image(192,192,32,32,dx3d.d3dTextures[0]),
Image(224,192,32,32,dx3d.d3dTextures[0]),
Image(256,192,32,32,dx3d.d3dTextures[0]),
Image(288,192,32,32,dx3d.d3dTextures[0]),
Image(0,224,32,32,dx3d.d3dTextures[0]),
Image(32,224,32,32,dx3d.d3dTextures[0]),
Image(64,224,32,32,dx3d.d3dTextures[0]),
Image(96,224,32,32,dx3d.d3dTextures[0]),
Image(128,224,32,32,dx3d.d3dTextures[0]),
Image(160,224,32,32,dx3d.d3dTextures[0]),
Image(192,224,32,32,dx3d.d3dTextures[0]),
Image(224,224,32,32,dx3d.d3dTextures[0]),
Image(256,224,32,32,dx3d.d3dTextures[0]),
Image(288,224,32,32,dx3d.d3dTextures[0]),
Image(0,256,32,32,dx3d.d3dTextures[0]),
Image(32,256,32,32,dx3d.d3dTextures[0]),
Image(64,256,32,32,dx3d.d3dTextures[0]),
Image(96,256,32,32,dx3d.d3dTextures[0]),
Image(128,256,32,32,dx3d.d3dTextures[0]),
Image(160,256,32,32,dx3d.d3dTextures[0]),
Image(192,256,32,32,dx3d.d3dTextures[0]),
Image(224,256,32,32,dx3d.d3dTextures[0]),
Image(256,256,32,32,dx3d.d3dTextures[0]),
Image(288,256,32,32,dx3d.d3dTextures[0]),
Image(0,288,32,32,dx3d.d3dTextures[0]),
Image(32,288,32,32,dx3d.d3dTextures[0]),
Image(64,288,32,32,dx3d.d3dTextures[0]),
Image(96,288,32,32,dx3d.d3dTextures[0]),
Image(128,288,32,32,dx3d.d3dTextures[0]),
Image(160,288,32,32,dx3d.d3dTextures[0]),
Image(192,288,32,32,dx3d.d3dTextures[0]),
Image(224,288,32,32,dx3d.d3dTextures[0]),
Image(256,288,32,32,dx3d.d3dTextures[0]),
Image(288,288,32,32,dx3d.d3dTextures[0])
};
Image Cursor=Image(288,288,32,32,dx3d.d3dTextures[1]);
pMyImage=&Cursor;
for(int i=0; i<100; i++)
{
MyImages[i]; //(the member call on MyImages[i] was lost to the forum's [i] markup)
}
while (bRun)
{
framecount++;
if(time(NULL)>iTime) {
fps=framecount;
framecount=0;
}
//Exactly what is the difference between PeekMessage and GetMessage?
if (PeekMessage(&msg,NULL,0,0,PM_REMOVE))
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
//The vertex buffer is filled each frame because I used to have something that moved the vertexes one pixel each frame, and needed to refresh the buffer.
dx3d.fillVertexBuffer();
//Last step, render the scene with the singleton's functions.
dx3d.Render();
}
return 0;
}
I've created two classes to handle Direct 3D, for the most part - DXGraphics and Image - and of course a custom struct for my vertex which contains x,y,z,rhw,u,v, and a D3DCOLOR. As you can tell by the RHW parameter, I am making a 2D application with Direct 3D. I have defined the max number of quads (256 last I checked) and the max number of textures (16). These limits are set when the application is compiled. The size of the vertex buffer, and the size of my global vertex array, is equal to MAX_QUADS*4.
My class, DXGraphics, contains the array of textures (however it is not a static variable; instead it is simply an array of IDirect3DTexture9 the size of MAX_TEXTURES (16)). Only one object is of the DXGraphics class, and I call it dx3d. Whenever my DXGraphics object constructs, it sets all of its interface pointers to NULL. Likewise it releases all of the interfaces not equal to NULL whenever it's destructed. It contains a D3D interface pointer, Device interface pointer, Vertex Buffer pointer, and of course 16 texture interface pointers for loaded textures (however at the time only two are used). I have functions created for initializing each of these interfaces. Additionally in DXGraphics there are also functions for filling the vertex buffer and rendering.
My image class, Image, is used to handle and display quads. Image contains the float variables X, Y, Width, Height, and Rotation. Additionally there is the integer value index, a boolean value show which determines if the Image is drawn, and a D3DCOLOR (also an integer I believe, but whatever) variable Color. The image class contains functions for constructing and destructing the Image class, moving the image onscreen, and updating the vertices. Every Image when constructed is automatically placed in the first available slot in a global array of Image pointers. Likewise when the Image is destructed the pointer is set to NULL. The vertices associated with the Image are determined by the Image's index in this array - so, if its index was 5, then the vertices associated with it would be 20, 21, 22, and 23, because 5*4=20. This keeps track of the Images so that the renderer can know what vertices in the array to draw.
So far this setup has worked wonderfully for me, but I am very inexperienced and this is only a simple setup. Could you possibly give me some pointers?
[Edited by - theJ89 on September 26, 2006 4:32:07 PM]
##### Share on other sites
I've updated the first post to be more organized and detailed.
##### Share on other sites
You will probably get a lot more responses if you post the code in with your posts instead of getting people to download it. Just paste it into [ source ] and [ / source ] blocks in your post ( lose the spaces though ).
From your description, is there a reason you are using fixed-size arrays instead of Vectors? If not, without having read your code, that would be my first suggestion. I will have more if you post the code in [smile]
##### Share on other sites
Quote:
Original post by Serapth: You will probably get a lot more responses if you post the code in with your posts instead of getting people to download it. Just paste it into [ source ] and [ / source ] blocks in your post ( lose the spaces though ). From your description, is there a reason you are using fixed-size arrays instead of Vectors? If not, without having read your code, that would be my first suggestion. I will have more if you post the code in [smile]
Thank you for the reply! I'll post the code in source tags now.
I have been told Vectors are slow for fast access purposes, so I have been using arrays.
##### Share on other sites
Quote:
Original post by theJ89: I have been told Vectors are slow for fast access purposes, so I have been using arrays.
That is for the most part, untrue.
You may want to check this thread:
http://www.gamedev.net/community/forums/topic.asp?topic_id=268669
From my own experience, vectors can be slightly slower in debug builds than arrays, but then... it's debug. In 99.9% of cases, who cares. In release builds, Vectors and arrays perform almost identically.
The loss in flexibility generally isn't worth the performance trade-off, even if one exists. Not to mention, if you have a vector of 2 objects vs. a static array pre-allocated to 16, you have major waste, and probably slower performance.
##### Share on other sites
Keep in mind, a lot of what I'm about to say is nitpicking. Just the little stuff you might not see or care about. First off, you really should consider using STL classes instead of fixed-size containers; it would make your code a whole lot easier.
That said, for the most part it all looks good. This is going to sound like an odd compliment, but your commenting style is very good. You actually write comments that might help you, as opposed to doing it "because you're supposed to" or *shudder* not doing it at all.
#defines are nasty evil. Switch them to const strings. This makes a night and day difference when debugging, and for other people working with your code.
for(int i=0; i<MAX_QUADS&&oIndex<0; i++) { if(cImages[i]==NULL) { oIndex=i; } }
Is a bug waiting to happen. In debug mode, the compiler zeros all memory, however in release it doesn't. If you don't explicitly ZeroMemory() or memset() the cImages array to 0, this code could lead to a hard-to-find bug down the road. ( Another reason to use vectors [smile] )
Also, this is a matter of opinion, but move oIndex out of the for loop and replace it with a break statement. It's much easier to read and maintain:
for(int i=0; i<MAX_QUADS; i++) { if(cImages[i]==NULL) { oIndex=i; break; } }
I havent gotten to where cImages[index] was called ( Aka, the call to new Image() ), but if you newed the Image() this is a memory leak.
Image::~Image()
{
cImages[index]=NULL;
}
EDIT::: NOPE, not a memory leak, you statically declared every Image. Still leaving this comment in as an object lesson to others that might read this, or for yourself if you decide to go more dynamic in the future.
Potential gotcha bug or exploit:
strcat(message, pszImgPath);
strcat(message, " - No available slots.");
If pszImgPath ends up being really long, you can have a buffer overflow here. Probably not a big deal, but still worth checking the size of pszImgPath before copying to a memory buffer with a fixed-size limit ( 255 ). You repeat this behaviour a few times.
Call me Mr Anti compound logic in an if statement guy, but I would break this into two tests. Keep in mind, the compiler compiles it down to the same code in the end:
if(cImages[i]!=NULL&&cImages[i]->show)
Finally:
Image MyImages[100]={
Image(0,0,32,32,dx3d.d3dTextures[0]),
Image(32,0,32,32,dx3d.d3dTextures[0]),
Image(64,0,32,32,dx3d.d3dTextures[0]),
Image(96,0,32,32,dx3d.d3dTextures[0]),
<snip><snip>
My god man, put that in a loop! ;) You could drop about 70+ lines of code easily there. Also, ill use this time once again to recommend vectors :)
You gotta admit, this is a bit simpler:
for(int y = 0; y < 10; y ++) { for(int x = 0; x< 10; x++) { Image(x*32,y*32,32,32,dx3d.d3dTextures[0]); } }
Btw... if the results turn out reverse, that was done in my head :) just flip x and y :)
[Edited by - Serapth on September 26, 2006 5:09:08 PM]
##### Share on other sites
Quote:
Original post by Serapth: Keep in mind, a lot of what I'm about to say is nitpicking. Just the little stuff you might not see or care about. First off, you really should consider using STL classes instead of fixed-size containers; it would make your code a whole lot easier.
Thanks for the reply. I am considering using Linked lists or vectors, which ever is faster.
Quote:
Original post by Serapth: That said, for the most part it all looks good. This is going to sound like an odd compliment, but your commenting style is very good. You actually write comments that might help you, as opposed to doing it "because you're supposed to" or *shudder* not doing it at all.
Thank you, when I was working on a game project written in javascript (it was a scripted game created with HTML) I made sure that I commented each function specifically so that if someone else were to read it, it would help them out. I'm kind of being slack on this because I'm not taking it as seriously as I should be.
Quote:
Original post by Serapth: #defines are nasty evil. Switch them to const strings. This makes a night and day difference when debugging, and for other people working with your code.
I've been wondering why this is. From what I can tell #define just replaces all instances of the keyword with the given code when it's building the source files. I can see how this would considerably increase the size of the file if, say for example you defined a string and then used the keyword all over the file, thus creating the same string in several places in the file which would be un-necessary. From what I understand consts are just read-only memory. The reason I was using defines is that I didn't want to take up unnecessary memory for things that were not called very often.
Quote:
Original post by Serapth: *** Source Snippet Removed *** Is a bug waiting to happen. In debug mode, the compiler zeros all memory, however in release it doesn't. If you don't explicitly ZeroMemory() or memset() the cImages array to 0, this code could lead to a hard-to-find bug down the road. ( Another reason to use vectors [smile] ) Also, this is a matter of opinion, but move oIndex out of the for loop and replace it with a break statement. It's much easier to read and maintain: *** Source Snippet Removed *** I haven't gotten to where cImages[index] was called ( Aka, the call to new Image() ), but if you newed the Image() this is a memory leak. Image::~Image(){ cImages[index]=NULL; } EDIT::: NOPE, not a memory leak, you statically declared every Image. Still leaving this comment in as an object lesson to others that might read this, or for yourself if you decide to go more dynamic in the future.
Actually, cImages is not used for dynamically allocating images in. It's used for keeping track of images in the program. Now that you mention it, I should use break. I don't use break very often. Plus, that's one less condition check I need to do in that loop.
Quote:
Original post by Serapth: Potential gotcha bug or exploit: strcat(message, "Cannot load texture "); strcat(message, pszImgPath); strcat(message, " - No available slots."); Alert(message); If pszImgPath ends up being really long, you can have a buffer overflow here. Probably not a big deal, but still worth checking the size of pszImgPath before copying to a memory buffer with a fixed-size limit ( 255 ). You repeat this behaviour a few times.
Ahhh, yeah, I noticed this here! I couldn't come up with an acceptable solution so I just left it that way. With other languages I don't have to worry about buffer sizes so I don't really know a solution to it offhand.
Quote:
Original post by Serapth: Call me Mr Anti compound logic in an if statement guy, but I would break this into two tests. Keep in mind, the compiler compiles it down to the same code in the end: if(cImages[i]!=NULL&&cImages[i]->show)
I figured it was just a matter of style. According to a book on C++ I read if the first condition is false none of the other conditions are even considered - but, the way you mentioned is usually the way I do it.
Quote:
Original post by Serapth: Finally: Image MyImages[100]={ Image(0,0,32,32,dx3d.d3dTextures[0]), Image(32,0,32,32,dx3d.d3dTextures[0]), Image(64,0,32,32,dx3d.d3dTextures[0]), Image(96,0,32,32,dx3d.d3dTextures[0]), My god man, put that in a loop! ;) You could drop about 70+ lines of code easily there. Also, I'll use this time once again to recommend vectors :)
Oh god I know. I was hoping to use a default constructor and then manually move them in a loop, but then C++ threw the whole "you can't have a default constructor with arguments" crap at me. Oh well.
Quote:
Original post by Serapth: for(int y = 0; y < 10; y ++) { for(int x = 0; x< 10; x++) { Image(x*32,y*32,32,32,dx3d.d3dTextures[0]); } }
Only problem I can see here is that "Image" isn't stored in a variable. It needs to be, the only thing that cImages does is point to existing images.
Thanks for all of the help, I'll be working on my code to see if I can improve it.
##### Share on other sites
You are worrying far too much about optimization. Write it correctly first, then make it fast.
For example:
I figured it was just a matter of style. According to a book on C++ I read if the first condition is false none of the other conditions are even considered - but, the way you mentioned is usually the way I do it.
Technically this is true. It will do only the first condition if there is a failure. Yet, broken into two if statements results in the exact same compiled code. If the outer if block fails, the inner if is never evaluated.
As to preventing buffer overflows, the easiest answer is to validate the parameter size before passing it into a method that takes a fixed-size buffer. Instead of constructing your string in 3 parts, build it all in advance, then check the size of the resulting string before passing it to strcat. Microsoft just spent a ton of money and time doing this to all of the Windows source code, so obviously performance isn't a huge issue.
Finally, the reason why #defines are evil is one, they are easily buried in code, and two, they don't evaluate in the debugger. When debugging, the const string is a thousand times easier to comprehend and the performance tradeoff is all but non-existent.
##### Share on other sites
Oh! Yes, one last thing if you will. I am having trouble using diffusion to color my quads. For some reason, they become a jumble of red and black lines. For the time being, D3DFVF_DIFFUSE has been removed from the FVF. Do you know a possible solution for this?
|
## 6.2 Non-Uniform Random Numbers
Uniform random numbers are useful, but usually we want to generate random numbers from some non-uniform distribution. There are a few ways to do this depending on the distribution.
### 6.2.1 Inverse CDF Transformation
The most generic method (but not necessarily the simplest) uses the inverse of the cumulative distribution function of the distribution.
Suppose we want to draw samples from a distribution with density $$f$$ and cumulative distribution function $$F(x) = \int_{-\infty}^x f(t)\,dt$$. Then we can do the following:
1. Draw $$U\sim \text{Unif}(0, 1)$$ using any suitable PRNG.
2. Let $$X = F^{-1}(U)$$. Then $$X$$ is distributed according to $$f$$.
Of course this method requires the inversion of the CDF, which is usually not possible. However, it works well for the Exponential($$\lambda$$) distribution. Here, we have that $f(x) = \frac{1}{\lambda}e^{-x/\lambda}$ and $F(x) = 1-e^{-x/\lambda}.$ Therefore, the inverse of the CDF is $F^{-1}(u) = -\lambda\log(1-u).$
First we can draw our uniform random variables.
set.seed(2017-12-4)
u <- runif(100)
hist(u)
rug(u)
Then we can apply the inverse CDF.
lambda <- 2 ## Exponential with mean 2
x <- -lambda * log(1 - u)
hist(x)
rug(x)
The problem with this method is that inverting the CDF is usually a difficult process and so other methods will be needed to generate other random variables.
### 6.2.2 Other Transformations
The inverse of the CDF is not the only function that we can use to transform uniform random variables into random variables with other distributions. Here are some common transformations.
To generate Normal random variables, we can
1. Generate $$U_1, U_2\sim\text{Unif}(0, 1)$$ using a standard PRNG.
2. Let $\begin{eqnarray*} Z_1 & = & \sqrt{-2\log U_1}\cos(2\pi\, U_2)\\ Z_2 & = & \sqrt{-2\log U_1}\sin(2\pi\, U_2) \end{eqnarray*}$ Then $$Z_1$$ and $$Z_2$$ are distributed independent $$N(0, 1)$$.
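For concreteness, here is a small R sketch of the Box-Muller step above (my own illustration, not from the text; it only uses runif() and base graphics):
u1 <- runif(1000)
u2 <- runif(1000)
z1 <- sqrt(-2 * log(u1)) * cos(2 * pi * u2)
z2 <- sqrt(-2 * log(u1)) * sin(2 * pi * u2)
hist(z1)               ## should look approximately standard Normal
qqnorm(z2); qqline(z2) ## and so should z2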
What about multivariate Normal random variables with arbitrary covariance structure? This can be done by applying an affine transformation to independent Normals.
If we want to generate $$X\sim\mathcal{N}(\mu, \Sigma)$$, we can
1. Generate $$Z\sim\mathcal{N}(0, I)$$ where $$I$$ is the identity matrix;
2. Let $$\Sigma = LL^\prime$$ be the Cholesky decomposition of $$\Sigma$$.
3. Let $$X = \mu + Lz$$. Then $$X\sim\mathcal{N}(\mu, \Sigma)$$.
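As a quick illustration (my own sketch, not part of the text), the same recipe in R; note that chol() returns the upper-triangular factor, so we transpose it to obtain the lower-triangular $$L$$:
mu <- c(1, 2)
Sigma <- matrix(c(2, 1, 1, 3), nrow = 2)
L <- t(chol(Sigma))   ## lower-triangular, so Sigma = L %*% t(L)
z <- rnorm(2)         ## Z ~ N(0, I)
x <- mu + L %*% z     ## X ~ N(mu, Sigma)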
In reality, you will not need to apply any of the transformations described above because almost any worthwhile analytical software system will have these generators built in, if not carved in stone. However, once in a while, it’s still nice to know how things work.
|
This function computes the optimal model parameters using one of three different model selection criteria (aic, bic, gmdl) and based on two different Degrees of Freedom estimates for PLS.
pls.ic(
X,
y,
m = min(ncol(X), nrow(X) - 1),
criterion = "bic",
naive = FALSE,
use.kernel = FALSE,
compute.jacobian = FALSE,
verbose = TRUE
)
## Arguments
X: matrix of predictor observations.
y: vector of response observations. The length of y is the same as the number of rows of X.
m: maximal number of Partial Least Squares components. Default is m=ncol(X).
criterion: Choice of the model selection criterion. One of the three options aic, bic, gmdl.
naive: Use the naive estimate for the Degrees of Freedom? Default is FALSE.
use.kernel: Use kernel representation? Default is use.kernel=FALSE.
compute.jacobian: Should the first derivative of the regression coefficients be computed as well? Default is FALSE.
verbose: If TRUE, the function prints a warning if the algorithms produce negative Degrees of Freedom. Default is TRUE.
## Value
The function returns an object of class "plsdof".
DoF
Degrees of Freedom
m.opt
optimal number of components
sigmahat
vector of estimated model errors
intercept
intercept
coefficients
vector of regression coefficients
covariance
if compute.jacobian=TRUE and use.kernel=FALSE, the function returns the covariance matrix of the optimal regression coefficients.
m.crash
the number of components for which the algorithm returns negative Degrees of Freedom
## Details
There are two options to estimate the Degrees of Freedom of PLS: naive=TRUE defines the Degrees of Freedom as the number of components +1, and naive=FALSE uses the generalized notion of Degrees of Freedom. If compute.jacobian=TRUE, the function uses the Lanczos decomposition to derive the Degrees of Freedom, otherwise, it uses the Krylov representation. (See Kraemer and Sugiyama (2011) for details.) The latter two methods only differ with respect to the estimation of the noise level.
## References
Akaike, H. (1973) "Information Theory and an Extension of the Maximum Likelihood Principle". Second International Symposium on Information Theory, 267 - 281.
Hansen, M., Yu, B. (2001). "Model Selection and Minimum Description Length Principle". Journal of the American Statistical Association, 96, 746 - 774.
Kraemer, N., Sugiyama M. (2011). "The Degrees of Freedom of Partial Least Squares Regression". Journal of the American Statistical Association 106 (494) https://www.tandfonline.com/doi/abs/10.1198/jasa.2011.tm10107
Kraemer, N., Braun, M.L. (2007) "Kernelizing PLS, Degrees of Freedom, and Efficient Model Selection", Proceedings of the 24th International Conference on Machine Learning, Omni Press, 441 - 448
Schwarz, G. (1978) "Estimating the Dimension of a Model". Annals of Statistics 6(2), 461 - 464.
## See also
pls.model, pls.cv
## Author
Nicole Kraemer, Mikio L. Braun
## Examples
n<-50 # number of observations
p<-5 # number of variables
X<-matrix(rnorm(n*p),ncol=p)
y<-rnorm(n)
# compute linear PLS
pls.object<-pls.ic(X,y,m=ncol(X))
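# The components documented under 'Value' above can then be inspected
# directly; for example (illustrative only):
pls.object$m.opt          # optimal number of components
pls.object$DoF            # Degrees of Freedom
pls.object$coefficients   # regression coefficients of the selected model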
|
## Tuesday, November 13, 2012
### Fearing the Bond Vigilantes
Paul Krugman posted on his blog an interesting write-up of a model which proposes that an "attack" by what he has called the "invisible bond vigilantes" is expansionary. In other words, an increase in the lending premium on sovereign debts should increase aggregate demand and thereby real GDP.
Krugman's mechanism makes a great deal of sense: (1) the increase in the risk premium generates depreciation of the currency for all domestic interest rates; (2) the lower exchange rate of the dollar causes an increase in net exports; and (3) thus we have an outward (rightward) shift of the IS curve in the IS-LM model for all interest rates.
I agree with Krugman on several points, but in this post I want to propose a simple model which might make the best form of the counter-argument -- i.e. that such an increase in the risk premium would be contractionary. I also note a few areas in which his model and also my own make some assumptions which I, well, wouldn't bet very much on at all.
(Nick Rowe also has some brief comments on Krugman's model here, and so do Tyler Cowen, Brad deLong, David Beckworth and Scott Sumner.)
Let me first note the general areas of agreement. It's pretty obvious that a country with a floating exchange rate and an independent currency (like the U.S. or U.K.) is in a totally different situation than a country without either (like Greece or Spain) insofar as debt can create default risk. I also agree with some key components of the model -- you'll see me re-use much of his framework in a moment -- and with the insight that an increase in the risk premium should generate higher net exports and thereby, cet. par., higher real GDP in the short run.
But here's how I thought about this. In terms of intuition, an increase in the risk premium is a supply shock. A central bank which follows some sort of rule regarding price stability should be somewhat constrained as to how much devaluation they can allow. From this, one can reason that the sign on the effect to real GDP should always be negative, given that the central bank will split the impact between a decline in real GDP and an increase in the price level according to its preferences.
And that's what one sees come through in this little model.
First, start with the linearized demand function y = -ar + be, where r is the real interest rate, e is the nominal exchange rate in terms of the price of foreign currency, and a and b are constants. Note the signs of the terms mean that an increase in the real interest rate decreases real GDP and an increase in exchange rate increases real GDP -- i.e. when foreign currency is more valuable, our net exports increase.
Second, assume a real domestic interest rate determined by r = r* + p, where r* is the real risk-free rate of return and p is the risk premium. An increase in the risk premium increases the interest rate.
Third, assume that purchasing power parity (PPP) holds to some reasonable extent in some reasonably short time frame. That is to say, identical tradable goods and services should cost roughly the same whether I'm buying them in dollars, yen, etc. We can write this out as P = ePf, where P is the domestic price level, e is (again) the exchange rate, and Pf is the foreign price level.
Fourth, assume some monetary policy rule. Here are three which I think characterize the broad swath of options, which I proceed to consider individually.
(1) price-level targeting: P = P*
(2) NGDP targeting: Py = N
(3) a Taylor-type rule: P + Ty = k, T > 0
To consider (1) with the assumptions explained above:
e = P* / Pf
y = -a(r* + p) + b(P* / Pf)
∂y/∂p = -a.
This means that real GDP will fall in response to an increase in the risk premium, due to its effect on real domestic investment, when the central bank is targeting the price level. (This should also be the response of an inflation-targeting central bank.) The strength of this effect will depend on the "a" parameter, which is the responsiveness of domestic investment to changes in the real interest rate.
To consider (2):
$e = \frac{N}{yP_f} \\ y = -a ( r^* + \rho ) + b\frac{N}{y P_f }\\ y^2 + ay ( r^* + \rho ) - b\frac{N}{P_f } = 0 \\ \frac{\partial y}{\partial \rho} = \frac{-ya}{2y + a ( r^* + \rho)} \\ \frac{\partial y}{\partial \rho} \approx \frac{-a}{2}$
Again, this means that real GDP will fall, though less than under the previous monetary policy specification, due to the supply shock's effect on domestic investment.
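As a quick numerical check on this case (my own illustration, with arbitrary parameter values), solving the quadratic for y at two nearby values of the risk premium confirms that output falls by roughly a/2 per unit increase in the premium:
a <- 1; b <- 1; rstar <- 0.02; Pf <- 1; N <- 100    # illustrative values only
y_of_rho <- function(rho) {
  c1 <- a * (rstar + rho)                  # y solves y^2 + c1*y - b*N/Pf = 0
  (-c1 + sqrt(c1^2 + 4 * b * N / Pf)) / 2  # positive root
}
(y_of_rho(0.01) - y_of_rho(0)) / 0.01       # approximately -a/2 = -0.5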
To consider (3),
$e = \frac{k-Ty}{P_f} \\ y = -a ( r^* + \rho ) + b\frac{k-Ty}{P_f} \\ y = \frac{-a ( r^* + \rho ) + \frac{bk}{P_f}}{1 + \frac{bT}{P_f}}\\ \frac{\partial y}{\partial \rho} = \frac{-a}{1 + \frac{bT}{P_f}}\\ -a < \frac{\partial y}{\partial \rho} < 0$
This finding should make a lot of sense. If T = 0, then there is no weight on output stabilization, and the Taylor-type rule becomes a price-level rule. As T rises, the effect on output is dampened -- though, naturally, by progressively larger increases in the price level.
Making some small and I believe acceptable assumptions -- the aggregate demand function, the determination of an interest rate, a soft PPP, and a monetary policy rule -- we see that an increase in the risk premium is likely contractionary. The intensity of the contraction, furthermore, depends on the willingness of the central bank to accept large increases in the price level to cushion output in the short run.
The major differences with Krugman's model in terms of structure are as follows. First, I assume that domestic investment will feel the sovereign's risk premium. My understanding is that this is consistent with the actual operation of debt markets; the debt of large companies will be knocked down in terms of credit rating when the sovereign's credit rating falls. Second, there are some important real-nominal distinctions in my model versus his; it's not clear why an independent central bank would tolerate the inflation Krugman's model builds in but never directly addresses. I think these explain the opposite findings.
Three further notes. First, I think the assumption of a fixed risk-free rate i* in the context of a run on US sovereign debt is highly strained, for the same reason that the small open economy model is not the same as the large open economy model. Second, when the risk premium rises, the increase in the real interest rate is likely not to capture the full effect on domestic investment -- there are other mechanisms, most importantly tighter lending standards, which will cause an even larger decrease in investment. Third, in the context of an increase in the risk premium on U.S. debt, the U.S. dollar is -- by our experience in the last recession -- likely to appreciate, unless global debt markets are sufficiently strong to withstand a global risk-off. Heavy capital flows into U.S. Treasuries will prevent devaluation for the time period Krugman's model expects an expansionary effect. It is worth noting that all three reasons suggest that an increase in the risk premium is likely to result in a decrease of real GDP in excess of what is predicted by my model.
Correction: I originally had "e" marked as the real exchange rate. It should have been the nominal exchange rate.
1. Evan: I disagree with your model (unless I misunderstand it).
Let me re-write your demand function as y = -ar + b(e-1). Then PPP can be interpreted as "the elasticity parameter b approaches infinity, because domestic and foreign goods become perfect substitutes."
1. Do you disagree with the assumption of PPP? If there's going to be a deviation from it, given flight-to-safety capital inflows into the US, it's going to be on the contractionary side.
2. I do disagree with PPP, but that's not the issue. Suppose I agreed with PPP. I would say PPP is a consequence of your demand function, in the limit as b approaches infinity. You cannot assert PPP as an independent equation, it is like having an overdetermined model.
Take the limit as b approaches infinity. A tiny real exchange rate depreciation would cause an infinitely big increase in output determined.
3. OK, I think I understand what went wrong; for PPP to hold, b must approach infinity, given the definitions. I'm not exactly sure if this can be fixed -- at this point, I'm in a little over my head -- but is the problem corrected by assuming e as the nominal exchange rate, rather than the real exchange rate? Or, is it just that PPP cannot hold in the context of the model? The reason I had invoked it was to connect exchange rates to the domestic price level, and then to monetary policy, which I saw as the underlying issue here. Thanks for your input.
4. The "e" in y=-ar+be should really be the real exchange rate (just as the 'r" should be the real interest rate). The equation is a simplified version of the IS curve in an open economy. Net exports depends on the relative price of foreign to domestic goods, which is the real exchange rate. For example, if the foreign price level doubled, and the exchange rate depreciated by half, net exports should stay the same, because the relative price of foreign and domestic goods would stay the same.
This is really just a specific example of a more general rule. Every equation in a model (except the central bank's reaction function/target/whatever) should be homogenous in nominal variables. If you double all the nominal variables, and leave the real variables unchanged, the equation should still be true. (Except for the central bank's reaction function/target/whatever, because otherwise the price level would be indeterminate). (I might be using the wrong math terminology when I say "homogenous. I got a 'D' in A-level math, a very long time ago.)
5. I think you need some sort of AS curve (or Phillips Curve) to pin down the price level (or inflation rate). Something like y=sP for an upward-sloping SRAS curve.
2. This comment has been removed by the author.
3. A few quick notes: this might be my mistake, but I don't quite follow how you've defined the real exchange rate. As I understand it, in your pseudo-IS function you explicitly label 'e' as the real exchange rate, but then go on to define the law of one price as being: P = P* x e, which doesn't quite make sense to me. The LOP is ordinarily written like that, but the 'e' in that equation corresponds to the 'nominal' exchange rate. That makes sense because, say, in a one good economy, if the consumption item is now more expensive (P is higher) and prices in the foreign country stay the same, the nominal exchange rate must appreciate in response so that the LOP holds.
It seems to me, then, that in case (1) you are simply keeping the real exchange rate constant, which Krugman effectively discusses in his proposed modelling. Price level targeting is, in this instance, a fixed nominal exchange rate regime.
As for (2), I don't quite understand what the monetary authority is supposed to be doing. Setting a target for nominal output is probably less informative than assuming the monetary authority simply sets a target of N for real output. In that event, the TR line in Krugman's model is a vertical line at the desired level of output, and the monetary authority will allow the interest rate to increase in the exact amount of the increase in the risk premium, which ought to have no effect on output (naturally). Setting a target in terms of nominal output, seems to me, would have, again, no effect on output: the increase in risk premium has a negative effect on investment which is perfectly offset by the expansionary effect of real depreciation.
Effectively, we need to think of Krugman's model in the following sense: the monetary authority has no control over actual interest rates, only nominal exchange rates. It uses the wedge between the long run exchange rate and the actual nominal exchange rate as the driving mechanism in the model: the CB sort of gains 'momentary' control of policy by increasing or decreasing that wedge. Thus, his model isn't actually a model of risk premia at all: note that an increase in the rate required by bond holders of the Bundesbank (the safe asset) has exactly the same implications as an increase in the risk premium, and the latter is therefore pretty irrelevant to his conclusions.
As he points out, the main result of the model is that an increase in interest (for whatever reason, I would add) implies currency depreciation via the wedge, and therefore always leads to an increase in output. The question is surely then whether this is the right way to model the relationship between the interest and nominal exchange rates.
1. Thanks -- I think I did mean the nominal exchange rate (I wrote this up in my notebook over the weekend). Your point on (1) is correct. On (2), yes regarding the real-stability-only central bank, but I think a nominal-output target isn't the same in my model. Any devaluation is going to boost the price level and real output; therefore, a return to the same Py will happen before the same y. I agree with your points about the original model, also.
4. Evan, I start from a different premise. The risk premium is currently too high and would be lowered given the appearance of "bond vigilantes". Rising yields would be a sign of recovery and return to normalcy.
1. I understand, but IIRC, didn't Krugman have a post in response to deLong re: a bond bubble, with (what I thought was) convincing evidence that bonds were appropriately priced given the mkt. forecast for future interest rates. You might respond that that path of interest rates is effectively a bet of too-low NGDP -- and I would agree -- but I don't subscribe to the reverse logic, i.e. that increasing interest rates will generate an increase in the NGDP path. Hopefully I'm not misreading you.
2. Evan,
I do not believe or advocate raising interest rates will generate an increase in NGDP path. http://macromarketmusings.blogspot.com/2012/02/can-raising-interest-rates-spark-robust.html
3. Evan,
I did a second post to clarify that I was staring from a different premise than most in this discussion.
|
My Math Forum 2 exact differential equation
Differential Equations Ordinary and Partial Differential Equations Math Forum
July 16th, 2013, 06:14 AM #1 Senior Member Joined: Sep 2012 Posts: 112 Thanks: 0 2 exact differential equation Q.1. I am supposed to test the following equation for exactness and then calculate its solution: $(x^2dy-y^2dx)/(x-y)^2=0$ My question is: should I solve it in the form $[x^2dy/(x-y)^2] + [-y^2dx/(x-y)^2]= 0$ or in the form $x^2dy - y^2dx= 0$? Q.2. I don't know how to solve the following problem: $xdx + ydy=(ydx - xdy)/(x^2 + y^2)$ This isn't an exact equation. So I have to transform it to an exact one. For that I need to calculate the integrating factor. But I just can't calculate it. Please help!
July 16th, 2013, 09:41 AM #2 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Thanks: 521 Math Focus: Calculus/ODEs Re: 2 exact differential equation Q1.) In which form is the equation exact?
July 16th, 2013, 11:24 PM #3 Senior Member Joined: Sep 2012 Posts: 112 Thanks: 0 Re: 2 exact differential equation $[x^2.dy/(x-y)^2] + [-y^2dx/(x-y)^2]= 0$ This form is the exact one.
July 17th, 2013, 05:58 PM #4 Math Team Joined: Sep 2007 Posts: 2,409 Thanks: 6 Re: 2 exact differential equation And multiplying both sides by $(x- y)^2$ would destroy that. Recall that every first order equation has a "multiplying factor", function of both variables which, when you multiply both sides of the equation by it, the equation becomes "exact". So of course, multiplying or dividing by a function can make an "exact" equation "non exact".
July 17th, 2013, 10:46 PM #5 Senior Member Joined: Sep 2012 Posts: 112 Thanks: 0 Re: 2 exact differential equation Q.1 The two forms of the equation $(x^2dy-y^2dx)/(x-y)^2=0$ give 2 different solutions. That's why I want to know which one is the correct form to solve. According to the answer page of my book, I should 1st transform the above equation to $x^2dy - y^2dx= 0$ and then find its solution. But I doubt if this will be correct or not. So please tell me if I should solve it in the form $x^2dy - y^2dx= 0$ or in the form $[x^2.dy/(x-y)^2] + [-y^2dx/(x-y)^2]= 0$. That's what I want to know. Q.2 Here I am not able to calculate the integrating factor which turns the inexact equation into an exact one. It's not like I don't know how to calculate the integrating factor, in general. But in this particular problem, the usual process of calculating the integrating factor(as mentioned in my book) isn't working. So I don't know what to do. The fact is I am studying Economics all by myself, without any teacher's help. In Economics, there is of course Mathematics. So I am studying Math with the help of my book and this site. You people have helped me a lot and no matter how many times I say “thanks" to you all it will be of lesser value. So I am posting too much problems here in the hope of getting help. I hope I am not bothering you people too much.
July 18th, 2013, 06:44 AM #6 Math Team Joined: Sep 2007 Posts: 2,409 Thanks: 6 Re: 2 exact differential equation The difficulty is that the the first form is NOT "exact". I should have checked that myself. But $x^2dx+ y^2dy$ is exact- it is the differential of $\frac{x^3}{3}+ \frac{y^3}{3}$. This is a case where $(x- y)^2$ was the "integrating factor".
July 18th, 2013, 07:38 AM #7
Senior Member
Joined: Sep 2012
Posts: 112
Thanks: 0
Re: 2 exact differential equation
Quote:
Originally Posted by HallsofIvy The difficulty is that the the first form is NOT "exact". I should have checked that myself. But $x^2dx+ y^2dy$ is exact- it is the differential of $\frac{x^3}{3}+ \frac{y^3}{3}$. This is a case where $(x- y)^2$ was the "integrating factor".
From where did $x^2dx + y^2dy$ come from? If it's about question 1 then the correct form is $x^2dy - y^2dx$.
July 18th, 2013, 07:54 AM #8 Senior Member Joined: Sep 2012 Posts: 112 Thanks: 0 Re: 2 exact differential equation Again $[(x^2dy)/(x-y)^2 ] + [(-y^2dx)/(x-y)^2]$ is exact. Here $N = x^2/(x-y)^2$ and $M = -y^2/(x-y)^2$. Partial derivative of M w.r.t. y = partial derivative of N w.r.t. x = $-2xy/(x-y)^3$. Because of this equality, the equation is exact. This is how my book teaches me to examine the exactness of an equation. Please correct me if I am wrong anywhere.
July 18th, 2013, 08:30 AM #9 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Thanks: 521 Math Focus: Calculus/ODEs Re: 2 exact differential equation Yes, the partials are equal and so: $\frac{x^2}{(x-y)^2}\,dy-\frac{y^2}{(x-y)^2}\,dx=0$ is exact. So what is your first step in obtaining the solution? edit: The equation is also separable...
July 18th, 2013, 09:26 AM #10 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Thanks: 521 Math Focus: Calculus/ODEs Re: 2 exact differential equation Q2.) We are given (presumably): $x\,dx+y\,dy=\frac{y\,dx-x\,dy}{x^2+y^2}$ My first step would be to get the ODE in the form: $M(x,y)\,dx+N(x,y)\,dy=0$ and so multiplying through by $x^2+y^2$ gives: $\left(x^2+y^2\right)x\,dx+\left(x^2+y^2\right)y\,dy=y\,dx-x\,dy$ $\left(\left(x^2+y^2\right)x-y\right)\,dx+\left(\left(x^2+y^2\right)y+x\right)\,dy=0$ The test for exactness reveals that this is inexact. So, we look at: $\frac{(2xy-1)-(2xy+1)}{\left(x^2+y^2\right)y+x}=-\frac{2}{\left(x^2+y^2\right)y+x}$ This is not a function of $x$ alone, so we next look at: $\frac{(2xy+1)-(2xy-1)}{\left(x^2+y^2\right)x-y}=\frac{2}{\left(x^2+y^2\right)x-y}$ This is not a function of $y$ alone, so computing an integrating factor presents a difficulty. So, I suggest we go back to the original: $x\,dx+y\,dy=\frac{y\,dx-x\,dy}{x^2+y^2}$ and write it in the form: $x+yy'+\frac{xy'-y}{x^2+y^2}=0$ $2x+2yy'-2\frac{1}{1+\left(\frac{x}{y}\right)^2}\cdot\frac{y-xy'}{y^2}=0$ Integrating, we find the implicit solution: $x^2+y^2-2\tan^{\small{-1}}\left(\frac{x}{y}\right)=C$
|
Capital Requirements Regulation (CRR)
Article 520 — Amendment of Regulation (EU) No 648/2012
Regulation (EU) No 648/2012 is amended as follows:
1. the following Chapter is added in Title IV:
Article 50a
Calculation of KCCP
1. For the purposes of Article 308 of Regulation (EU) No 575/2013 of the European Parliament and of the Council of 26 June 2013 on prudential requirements for credit institutions and investment firms(1), a CCP shall calculate KCCP as specified in paragraph 2 of this Article for all contracts and transactions it clears for all its clearing members falling within the coverage of the given default fund.
2. A CCP shall calculate the hypothetical capital (KCCP) as follows:
$${K_{\mathrm{CCP}}} = {\sum_{i} \max \{\mathrm{EBRM}_{i} - \mathrm{IM}_{i} - \mathrm{DF}_{i} ; 0\} \cdot \mathrm{RW} \cdot \text{capital ratio}}$$
where:
All values in the formula in the first subparagraph shall relate to the valuation at the end of the day before the margin called on the final margin call of that day is exchanged.
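Purely as an arithmetical illustration of the formula in paragraph 2 (this sketch is not part of the Regulation; the exposure figures are made up, and the 20 % risk weight and 8 % capital ratio used here are assumptions, since the 'where:' definitions are not reproduced above):
EBRM <- c(120, 80, 60)   # per-member exposure before risk mitigation (made-up)
IM   <- c(40, 30, 20)    # initial margin posted by each clearing member (made-up)
DF   <- c(10, 10, 10)    # pre-funded default fund contribution of each member (made-up)
RW <- 0.20; capital_ratio <- 0.08   # assumed values, see note above
K_CCP <- sum(pmax(EBRM - IM - DF, 0)) * RW * capital_ratio
K_CCP                    # 0.20 * 0.08 * (70 + 40 + 30) = 2.24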
3. A CCP shall undertake the calculation required by paragraph 2 at least quarterly or more frequently where required by the competent authorities of those of its clearing members which are institutions.
4. For the purpose of paragraph 3, EBA shall develop draft implementing technical standards to specify the following:
1. the frequency and dates of the calculation laid down in paragraph 2;
2. the situations in which the competent authority of an institution acting as a clearing member may require higher frequencies of calculation and reporting than those referred to in point (a).
EBA shall submit those draft implementing technical standards to the Commission by 1 January 2014.
Power is conferred on the Commission to adopt the implementing technical standards referred to in the first subparagraph in accordance with Article 15 of Regulation (EU) No 1093/2010.
Article 50b
General rules for the calculation of KCCP
For the purposes of the calculation laid down in Article 50a(2), the following shall apply:
1. a CCP shall calculate the value of the exposures it has to its clearing members as follows:
1. for exposures arising from contracts and transactions listed in Article 301(1)(a) and (d) of Regulation (EU) No 575/2013 it shall calculate them in accordance with the mark-to-market method laid down in Article 274 thereof;
2. for exposures arising from contracts and transactions listed in Article 301(1)(b), (c) and (e) of Regulation (EU) No 575/2013 it shall calculate them in accordance with the Financial Collateral Comprehensive Method specified in Article 223 of that Regulation with supervisory volatility adjustments, specified in Articles 223 and 224 of that Regulation. The exception set out in point (a) of Article 285(3) of that Regulation, shall not apply;
3. for exposures arising from transactions not listed in Article 301(1) of Regulation (EU) No 575/2013 and which entails settlement risk only it shall calculate them in accordance with Part Three, Title V of that Regulation;
2. for institutions that fall under the scope of Regulation (EU) No 575/2013 the netting sets are the same as those defined in Part Three, Title II of that Regulation;
3. when calculating the values referred to in point (a), the CCP shall subtract from its exposures the collateral posted by its clearing members, appropriately reduced by the supervisory volatility adjustments in accordance with the Financial Collateral Comprehensive Method specified in Article 224 of Regulation (EU) No 575/2013;
4. where a CCP has exposures to one or more CCPs it shall treat any such exposures as if they were exposures to clearing members and include any margin or pre-funded contributions received from those CCPs in the calculation of KCCP;
5. where a CCP has in place a binding contractual arrangement with its clearing members that allows it to use all or part of the initial margin received from its clearing members as if they were pre-funded contributions, the CCP shall consider that initial margin as prefunded contributions for the purposes of the calculation in paragraph 1 and not as initial margin;
6. when applying the Mark-to-Market Method as set out in Article 274 of Regulation (EU) No 575/2013, a CCP shall replace the formula in point (c)(ii) of Article 298(1) of that Regulation with the following:
$${\mathrm{PCE} _{\mathrm{red}}} = {{0.15 \cdot \mathrm{PCE} _{\mathrm{gross}}} + {0.85 \cdot \mathrm{NGR} \cdot \mathrm{PCE} _{\mathrm{gross}}}}$$
where the numerator of NGR is calculated in accordance with Article 274(1) of that Regulation and just before the variation margin is actually exchanged at the end of the settlement period, and the denominator is gross replacement cost;
7. where a CCP cannot calculate the value of NGR as set out in point (c)(ii) of Article 298(1) of Regulation (EU) No 575/2013, it shall:
1. notify those of its clearing members which are institutions and their competent authorities about its inability to calculate NGR and the reasons why it is unable to carry out the calculation;
2. for a period of three months, it may use a value of NGR of 0,3 to perform the calculation of PCEred specified in point (h) of this Article;
8. where, at the end of the period specified in point (ii) of point (i), the CCP would still be unable to calculate the value of NGR, it shall do the following:
1. stop calculating KCCP;
2. notify those of its clearing members which are institutions and their competent authorities that it has stopped calculating KCCP;
9. for the purpose of calculating the potential future exposure for options and swaptions in accordance with the Mark-to-Market Method specified in Article 274 of Regulation (EU) No 575/2013, a CCP shall multiply the notional amount of the contract by the absolute value of the option's delta $(\delta V / \delta p)$ as set out in point (a) of Article 280(1) of that Regulation;
10. where a CCP has more than one default fund, it shall carry out the calculation laid down in Article 50a(2) for each default fund separately.
Article 50c
Reporting of information
1. For the purposes of Article 308 of Regulation (EU) No 575/2013, a CCP shall report the following information to those of its clearing members which are institutions and to their competent authorities:
1. the hypothetical capital (KCCP);
2. the sum of pre-funded contributions (DFCM);
3. the amount of its pre-funded financial resources that it is required to use — by law or due to a contractual agreement with its clearing members — to cover its losses following the default of one or more of its clearing members before using the default fund contributions of the remaining clearing members (DFCCP);
4. the total number of its clearing members (N);
5. the concentration factor (β), as set out in Article 50d.
Where the CCP has more than one default fund, it shall report the information in the first subparagraph for each default fund separately.
2. The CCP shall notify those of its clearing members which are institutions at least quarterly or more frequently where required by the competent authorities of those clearing members.
3. EBA shall develop draft implementing technical standards to specify the following:
1. the uniform template for the purpose of the reporting specified in paragraph 1;
2. the frequency and dates of the reporting specified in paragraph 2;
3. the situations in which the competent authority of an institution acting as a clearing member may require higher frequencies of reporting than those referred to in point (b).
EBA shall submit those draft implementing technical standards to the Commission by 1 January 2014.
Power is conferred on the Commission to adopt the implementing technical standards referred to in the first subparagraph in accordance with Article 15 of Regulation (EU) No 1093/2010.
Article 50d
Calculation of specific items to be reported by the CCP
For the purposes of Article 50c, the following shall apply:
1. where the rules of a CCP provide that it use part or all of its financial resources in parallel to the pre-funded contributions of its clearing members in a manner that makes those resources equivalent to pre-funded contributions of a clearing member in terms of how they absorb the losses incurred by the CCP in the case of the default or insolvency of one or more of its clearing members, the CCP shall add the corresponding amount of those resources to DFCM;
2. where the rules of a CCP provide that it use part or all of its financial resources to cover its losses due to the default of one or more of its clearing members after it has depleted its default fund, but before it calls on the contractually committed contributions of its clearing members, the CCP shall add the corresponding amount of those additional financial resources $(\mathrm{DF}_{\mathrm{CCP}}^{a})$ to the total amount of pre-funded contributions (DF) as follows:
$${\mathrm{DF}} = {\mathrm{DF} _{\mathrm{CCP}} + \mathrm{DF} _{\mathrm{CM}} + {\mathrm{DF} _{\mathrm{CCP}} ^{a}}}$$.
3. a CCP shall calculate the concentration factor (β) in accordance with the following formula:
$${\beta} = {\frac{\mathrm{PCE} _{red,1} + \mathrm{PCE} _{red,2}}{\sum \mathrm{PCE} _{red,i}}}$$
where:
• PCEred,i = the reduced figure for potential future credit exposure for all contracts and transaction of a CCP with clearing member i;
• PCEred,1 = the reduced figure for potential future credit exposure for all contracts and transaction of a CCP with the clearing member that has the largest PCEred value;
• PCEred,2 = the reduced figure for potential future credit exposure for all contracts and transaction of a CCP with the clearing member that has the second largest PCEred value.
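Purely as an illustration (not part of the Regulation), the concentration factor is simply the share of the two largest PCEred values in the total:
PCE_red <- c(50, 30, 10, 5)   # made-up per-member reduced potential future exposures
beta <- sum(sort(PCE_red, decreasing = TRUE)[1:2]) / sum(PCE_red)
beta                          # (50 + 30) / 95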
2. in Article 11(15), point (b) is deleted;
3. in Article 89, the following paragraph is inserted:
1. Until 15 months after the date of entry into force of the latest of the regulatory technical standards referred to in Articles 16, 25, 26, 29, 34, 41, 42, 44, 45, 47 and 49, or until a decision is made under Article 14 on the authorisation of the CCP, whichever is earlier, that CCP shall apply the treatment specified in the third subparagraph of this paragraph.
Until 15 months after the date of entry into force of the latest of the regulatory technical standards referred to in Articles 16, 26, 29, 34, 41, 42, 44, 45, 47 and 49, or until a decision is made under Article 25 on the recognition of the CCP, whichever is earlier, that CCP shall apply the treatment specified in the third subparagraph of this paragraph.
Until the deadlines defined in the first two subparagraphs of this paragraph, and subject to the fourth subparagraph of this paragraph, where a CCP neither has a default fund nor has in place a binding arrangement with its clearing members that allows it to use all or part of the initial margin received from its clearing members as if they were pre-funded contributions, the information it is to report in accordance with Article 50c(1) shall include the total amount of initial margin it has received from its clearing members.
The deadlines referred to in the first and second subparagraphs of this paragraph may be extended by six months in accordance with a Commission implementing act adopted pursuant to Article 497(3) of Regulation (EU) No 575/2013.
|
# Difference between revisions of "Path-connected space"
This article defines a homotopy-invariant property of topological spaces, i.e. a property of homotopy classes of topological spaces
View other homotopy-invariant properties of topological spaces OR view all properties of topological spaces
This is a variation of connectedness. View other variations of connectedness
View a complete list of basic definitions in topology
## Definition
### Symbol-free definition
A topological space is said to be path-connected or arc-wise connected if given any two points on the topological space, there is a path (or an arc) starting at one point and ending at the other.
### Definition with symbols
A topological space $X$ is said to be path-connected if for any two points $a,b \in X$ there is a continuous map $\gamma:[0,1] \to X$ such that $\gamma(0) = a$ and $\gamma(1) = b$.
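As a quick illustration (added here; not part of the original article): any convex subset $C \subseteq \mathbb{R}^n$ is path-connected, because for $a,b \in C$ the straight-line path
$\gamma(t) = (1-t)a + tb, \qquad t \in [0,1]$
is continuous, lies in $C$ by convexity, and satisfies $\gamma(0)=a$ and $\gamma(1)=b$.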
## Metaproperties
### Products
This property of topological spaces is closed under taking arbitrary products
View all properties of topological spaces closed under products
A direct product of path-connected spaces is path-connected. This is true both for finite and infinite direct products (using the product topology for infinite direct products).
### Coarsening
This property of topological spaces is preserved under coarsening, viz, if a set with a given topology has the property, the same set with a coarser topology also has the property
Shifting to a coarser topology preserves the property of being path-connected. This is because a path in a finer topology continues to remain a path in a coarser topology -- we simply compose with the identity map from the finer to the coarser topology (which, by definition, must be continuous).
### Unions
A union of a family of path-connected subsets having nonempty intersection is path-connected: a path between points in different members of the family is obtained by concatenating a path to a common point of the intersection with a path from that common point.
### Retract-hereditariness
This property of topological spaces is hereditary on retracts, viz if a space has the property, so does any retract of it
View all retract-hereditary properties of topological spaces
Any retract of a path-connected space is path-connected.
### Closure under continuous images
The image, via a continuous map, of a topological space having this property, also has this property
More generally, the image of a path-connected set under a continuous map is again path-connected.
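A one-line proof sketch (added for clarity; not in the original article): given a continuous map $f:X \to Y$ with $X$ path-connected, and points $f(a), f(b) \in f(X)$, choose a path $\gamma:[0,1] \to X$ from $a$ to $b$; then $f \circ \gamma$ is continuous with $(f \circ \gamma)(0) = f(a)$ and $(f \circ \gamma)(1) = f(b)$, so $f(X)$ is path-connected.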
## References
### Textbook references
• Topology (2nd edition) by James R. Munkres, Page 155 (formal definition)
• Lecture Notes on Elementary Topology and Geometry (Undergraduate Texts in Mathematics) by I. M. Singer and J. A. Thorpe, Page 52 (formal definition): introduced under name arcwise connected space
|
The library superb offers two main functionalities. First, it can be used to obtain plots with adjusted error bars. The main function is superbPlot() but you can also use superbShiny() for a graphical user interface requiring no programming nor scripting.
The purpose of superbPlot() is to provide a plot with summary statistics and correct error bars. With simple adjustments, the error bars are adjusted to the design (within or between), to the purpose (single or pair-wise differences), to the sampling method (simple randomized samples or cluster randomized samples) and to the population size (infinite or of a specific size). The superbData() function does not generate the plot but returns the summary statistics and the interval boundaries. These can afterwards be sent to other plotting environments.
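For example (a sketch only: it assumes superbData() accepts the same design arguments as superbPlot(); check the package documentation for the exact names of the returned components):
summaries <- superbData(ToothGrowth,
    BSFactors = c("dose","supp"),
    variables = "len",
    statistic = "mean",
    errorbar  = "CI", gamma = 0.95 )
str(summaries)  # inspect the returned summary statistics and interval boundaries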
The second functionality is to generate random datasets. The function GRD() is used to easily generate random data from any design (within or between) using any population distribution with any parameters, and with various effect sizes. GRD() is useful to test statistical procedures and plotting procedures such as superbPlot().
# Installation
The official CRAN version can be installed with
install.packages("superb")
library(superb)
The development version can be accessed through GitHub:
devtools::install_github("dcousin3/superb")
library(superb)
# Examples
The easiest is to use the graphical interface which can be launched with
superbShiny()
The following examples use the script-based commands.
Here is a simple example illustrating the ToothGrowth dataset of rats (in which the dependent variable is len) as a function of the dose of vitamin and the form of the vitamin supplements supp (pills or juice)
superbPlot(ToothGrowth,
BSFactors = c("dose","supp"),
variables = "len" )
In the above, the default summary statistic, the mean, is used. The error bars are, by default, the 95% confidence intervals. These two choices can be changed with the statistic and the errorbar arguments.
This second example explicitly indicates to display the median instead of the default mean summary statistics
superbPlot(ToothGrowth,
BSFactors = c("dose","supp"),
variables = "len",
statistic = "median")
As a third example, we illustrate the harmonic mean hmean along with 99.9% confidence intervals using lines:
superbPlot(ToothGrowth,
BSFactors = c("dose","supp"),
variables = "len",
statistic = "hmean",
errorbar = "CI", gamma = 0.999,
plotStyle = "line")
The second function, GRD(), can be used to generate random data from designs with various within- and between-subject factors. This example generates scores for 300 simulated participants in a 3 x 2 design with repeated measures on Days. Only the factor Day has a simulated effect on the scores, reducing them across days:
testdata <- GRD(
RenameDV = "score",
SubjectsPerGroup = 100,
BSFactors = "Difficulty(A,B,C)",
WSFactors = "Day(2)",
Population = list(mean = 75,stddev = 12,rho = 0.5),
Effects = list("Day" = slope(-3) )
)
head(testdata)
## id Difficulty score.1 score.2
## 1 1 A 87.50433 102.15152
## 2 2 A 60.07547 51.97630
## 3 3 A 75.34846 59.80454
## 4 4 A 60.93139 55.34069
## 5 5 A 81.91479 86.32453
## 6 6 A 83.31082 84.09887
The simulated scores are illustrated using a more elaborate layout, the pointjitterviolin, which, in addition to the mean and confidence interval, shows the raw data using jitter dots and the distribution using a violin plot:
superbPlot(testdata,
BSFactors = "Difficulty",
WSFactors = "Day(2)",
variables = c("score.1","score.2"),
plotStyle = "pointjitterviolin",
errorbarParams = list(color = "purple"),
pointParams = list( size = 3, color = "purple")
)
In the above example, optional arguments errorbarParams and pointParams are used to inject specifications in the error bars and the points respectively. When these arguments are used, they override the defaults from superbPlot().
# For more
As seen, the library superb makes it easy to illustrate summary statistics along with the error bars. Some layouts can be used to visualize additional characteristics of the raw data. Finally, the resulting appearance can be customized in various ways.
The complete documentation is available on this site.
A general introduction to the superb framework underlying this library is in press at Advances in Methods and Practices in Psychological Sciences (Cousineau, Goulet, & Harding, in press).
# References
Cousineau D, Goulet M, Harding B (2021). “Summary plots with adjusted error bars: The superb framework with an implementation in R.” Advances in Methods and Practices in Psychological Science, 2021, 1–46. doi: https://doi.org/10.1177/25152459211035109
Walker, J. A. L. (2021). “Summary plots with adjusted error bars (superb).” YouTube video.
|
# multiple file & folder progress bar
## Recommended Posts
Hi,
I'm sure loads of progress bars have been done in the past but I seem to struggle understanding the code and getting it to actually work
see below my code as I have it at the moment; I would like to attach a progress bar once Copy Files has been clicked on the form, but I'm not sure how to do it.
I even looked at the CopyHere method, but that won't be good if files already exist (I have only the option to overwrite rather than check the last modified date)
hope this makes sense
#include <ButtonConstants.au3>
#include <GUIConstantsEx.au3>
#include <ProgressConstants.au3>
#include <StaticConstants.au3>
#include <WindowsConstants.au3>
#Region ### START Koda GUI section ### Form=H:\Autoit\Forms\epack.kxf
$Form1 = GUICreate("E-Pack Copy for Citrix", 223, 295, 192, 133)
GUISetIcon("H:\Misc\mblogo.ico")
$Label1 = GUICtrlCreateLabel("Select the servers to", 8, 8, 146, 20)
GUICtrlSetFont(-1, 10, 800, 0, "MS Sans Serif")
$Label2 = GUICtrlCreateLabel("copy the E-Pack files to:", 8, 25, 171, 20)
GUICtrlSetFont(-1, 10, 800, 0, "MS Sans Serif")
$Button1 = GUICtrlCreateButton("Select All", 128, 80, 89, 25, $WS_GROUP)
GUICtrlSetTip(-1, "Select all Citrix servers")
$Button2 = GUICtrlCreateButton("Clear All", 128, 112, 89, 25, $WS_GROUP)
GUICtrlSetTip(-1, "Clear all selected Citrix servers")
$Button3 = GUICtrlCreateButton("Copy Files", 128, 176, 89, 65, $WS_GROUP)
GUICtrlSetTip(-1, "Copy EPack files to the selected servers")
$Button4 = GUICtrlCreateButton("Exit", 128, 144, 89, 25, $WS_GROUP)
GUICtrlSetTip(-1, "Live Citrix Presentation Server")
GUICtrlSetCursor(-1, 0)
GUICtrlSetTip(-1, "Live Citrix Presentation Server")
GUICtrlSetCursor(-1, 0)
GUICtrlSetTip(-1, "Live Citrix Presentation Server")
GUICtrlSetCursor(-1, 0)
GUICtrlSetTip(-1, "Live Citrix Presentation Server")
GUICtrlSetCursor(-1, 0)
GUICtrlSetTip(-1, "DR Citrix Presentation Server")
GUICtrlSetCursor(-1, 0)
GUICtrlSetTip(-1, "DR Citrix Presentation Server")
GUICtrlSetCursor(-1, 0)
GUICtrlSetTip(-1, "User Acceptance Testing Citrix Presentation Server")
GUICtrlSetCursor(-1, 0)
GUICtrlSetTip(-1, "Development Citrix Presentation Server")
GUICtrlSetCursor(-1, 0)
$Progress1 = GUICtrlCreateProgress(8, 256, 201, 25)
$Button5 = GUICtrlCreateButton("Please Read", 128, 48, 89, 25,$WS_GROUP)
$Label3 = GUICtrlCreateLabel("Copy Progress. . .", 8, 240, 87, 17)
Dim $checkbox[8]
$checkbox[0] = GUICtrlCreateCheckbox("LonCtxPS1", 8, 48, 81, 17)
$checkbox[1] = GUICtrlCreateCheckbox("LonCtxPS2", 8, 72, 81, 17)
$checkbox[2] = GUICtrlCreateCheckbox("LonCtxPS3", 8, 96, 81, 17)
$checkbox[3] = GUICtrlCreateCheckbox("LonCtxPS4", 8, 120, 81, 17)
$checkbox[4] = GUICtrlCreateCheckbox("PetCtxPS1", 8, 144, 81, 17)
$checkbox[5] = GUICtrlCreateCheckbox("PetCtxPS2", 8, 168, 81, 17)
$checkbox[6] = GUICtrlCreateCheckbox("UATCtxPS1", 8, 192, 81, 17)
$checkbox[7] = GUICtrlCreateCheckbox("DEVCtxPS1", 8, 216, 81, 17)
GUISetState(@SW_SHOW)
#EndRegion ### END Koda GUI section ###
While 1
    $nMsg = GUIGetMsg()
    Select
        Case $nMsg = $GUI_EVENT_CLOSE
            Exit
        Case $nMsg = $Button1 ;Check All
            For $n = 0 To UBound($checkbox) - 1 ;For $n = 0 To 7 Step 1
                GUICtrlSetState($checkbox[$n], $GUI_CHECKED)
            Next
        Case $nMsg = $Button2 ;Clear All
            For $n = 0 To UBound($checkbox) - 1 ;For $n = 0 To 7 Step 1
                GUICtrlSetState($checkbox[$n], $GUI_UNCHECKED)
            Next
        Case $nMsg = $Button3 ;Copy Files
            $fSelection = False
            For $n = 0 To UBound($checkbox) - 1
                If BitAND(GUICtrlRead($checkbox[$n]), $GUI_CHECKED) Then
                    $fSelection = True
                    ExitLoop
                EndIf
            Next
            If $fSelection Then
                $answer = MsgBox(36, 'Copy Files to Citrix servers', 'Are you sure you want to copy E-Packs files to the selected servers?')
                If $answer = 6 Then
                    $status1 = GUICtrlRead($checkbox[0])
                    $status2 = GUICtrlRead($checkbox[1])
                    $status3 = GUICtrlRead($checkbox[2])
                    $status4 = GUICtrlRead($checkbox[3])
                    $status5 = GUICtrlRead($checkbox[4])
                    $status6 = GUICtrlRead($checkbox[5])
                    $status7 = GUICtrlRead($checkbox[6])
                    $status8 = GUICtrlRead($checkbox[7])
                    If $status1 = 1 Then Call("o1copy")
                    If $status2 = 1 Then Call("o2copy")
                    If $status3 = 1 Then Call("o3copy")
                    If $status4 = 1 Then Call("o4copy")
                    If $status5 = 1 Then Call("o5copy")
                    If $status6 = 1 Then Call("o6copy")
                EndIf
                If $answer = 7 Then
                EndIf
            Else
                MsgBox(48, 'No Items Selected', 'You have not selected any Citrix servers to copy E-Pack files to, Please select from the list')
            EndIf
        Case $nMsg = $Button4
            Exit
        Case $nMsg = $Button5
            MsgBox(64, 'Please Read. . .', '1. You need to be logged in with your admin account on the network.' & @CRLF & @CRLF & '2. Epack Files need to be located in' & @CRLF & ' \\ad.saffery.net\saffery\IT\Rollout\Software\CaseWare\Updates')
    EndSelect
WEnd
;Functions for copying files
Func o1copy()
DirCopy("C:\Temp\test\1", "C:\Temp\end", 1)
EndFunc
Func o2copy()
DirCopy("C:\Temp\test\2", "C:\Temp\end", 1)
EndFunc
Func o3copy()
DirCopy("C:\Temp\test\3", "C:\Temp\end", 1)
EndFunc
Func o4copy()
DirCopy("C:\Temp\test\4", "C:\Temp\end", 1)
EndFunc
Func o5copy()
DirCopy("C:\Temp\test\5", "C:\Temp\end", 1)
EndFunc
Func o6copy()
DirCopy("C:\Temp\test\6", "C:\Temp\end", 1)
EndFunc
##### Share on other sites
In the loop where you are testing if anything is checked, don't exit the loop. Check them all and add up either a file count or the file sizes for a simple naive progress calculation.
##### Share on other sites
In the loop where you are testing if anything is checked, don't exit the loop. Check them all and add up either a file count or the file sizes for a simple naive progress calculation.
Sorry to sound stupid, but could someone show me how it is done? I don't seem to fully understand this yet (I'm trying though!!)
many thanks
##### Share on other sites
Something like this:
; ...
; Array of file names to match up with checkboxes
Global $aFileNames[6] = ["C:\Temp\test\1", "C:\Temp\test\2", "C:\Temp\test\3", _
        "C:\Temp\test\4", "C:\Temp\test\5", "C:\Temp\test\6"]
; ...
Case $nMsg = $Button3 ;Copy Files
    $iSelectionCount = 0
    $iSelectionSize = 0
    For $n = 0 To UBound($checkbox) - 1
        If BitAND(GUICtrlRead($checkbox[$n]), $GUI_CHECKED) Then
            $iSelectionCount += 1
            ; DirGetSize is used here because the sources are folders (FileGetSize only handles single files)
            $iSelectionSize += DirGetSize($aFileNames[$n])
        EndIf
    Next
    If $iSelectionSize Then
        $answer = MsgBox(36, 'Copy Files to Citrix servers', _
                'Are you sure you want to copy ' & $iSelectionCount & ' E-Packs files' & @CRLF & _
                '(' & $iSelectionSize & ' bytes total) to the selected servers?')
        If $answer = 6 Then
            $iFileNum = 0
            ProgressOn("Copying E-Packs", "Copying " & $iSelectionCount & " files", "0% Complete")
            For $n = 0 To UBound($checkbox) - 1
                If BitAND(GUICtrlRead($checkbox[$n]), $GUI_CHECKED) Then
                    $iFileNum += 1
                    ProgressSet(($iFileNum - 1) / $iSelectionCount * 100, "Copying " & $iFileNum & " of " & $iSelectionCount)
                    _Copy($aFileNames[$n])
                    ProgressSet($iFileNum / $iSelectionCount * 100)
                EndIf
            Next
            ProgressOff() ; close the progress window once all copies are done
        EndIf
    Else
        MsgBox(48, 'No Items Selected', 'You have not selected any ...')
    EndIf
; ...
;Functions for copying files
Func _copy($sFile)
DirCopy(\$sFile, "C:\Temp\end", 1)
EndFunc ;==>_copy
This uses an array of file names that match up with the array of checkboxes to avoid the goofy bunch of Call()s you had.
The percentage in the progress bar here is based on simple file count. You can update it to use file sizes copied / total file size.
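If you want the size-based version just mentioned, a rough sketch (untested; same assumptions as the snippet above, i.e. $aFileNames holds folder paths and $iSelectionSize holds the total bytes) would accumulate bytes instead of counting files:
; Size-based percentage: add up bytes copied so far. It still only updates between copies,
; because DirCopy blocks until each folder has finished copying.
$iBytesDone = 0
For $n = 0 To UBound($checkbox) - 1
    If BitAND(GUICtrlRead($checkbox[$n]), $GUI_CHECKED) Then
        _Copy($aFileNames[$n])
        $iBytesDone += DirGetSize($aFileNames[$n])
        ProgressSet($iBytesDone / $iSelectionSize * 100, $iBytesDone & " of " & $iSelectionSize & " bytes")
    EndIf
Next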
Note that the progress is not updated during each actual file copy, just before/after each copy. There are more advanced techniques to use a File System Object (FSO) and get a file copy with progress. Search for examples on the forum if you fell ready to go there.
|
## Journal of Applied Mathematics and Computing OnlineFirst articles
23.05.2019 | Original Research
### A robust numerical method for pricing American options under Kou’s jump-diffusion models based on penalty method
We develop a novel numerical method for pricing American options under Kou’s jump-diffusion model which governed by a partial integro-differential complementarity problem (PIDCP). By using a penalty approach, the PIDCP results in a nonlinear …
17.05.2019 | Original Research
### Connectivity and edge-bipancyclicity of Hamming shell
A graph obtained by deleting a Hamming code of length $$n = 2^r - 1$$ from an n-cube $$Q_n$$ is called a Hamming shell. It is well known that a Hamming shell is regular, vertex-transitive, edge-transitive and distance preserving …
16.05.2019 | Original Research
### Convergence of numerical schemes for the solution of partial integro-differential equations used in heat transfer
Integro-differential equations play an important role in may physical phenomena. For instance, it appears in fields like fluid dynamics, biological models and chemical kinetics. One of the most important physical applications is the heat transfer …
11.05.2019 | Original Research
### Finite difference and spectral collocation methods for the solution of semilinear time fractional convection-reaction-diffusion equations with time delay
In this paper, an efficient numerical method is constructed to solve the nonlinear fractional convection-reaction-diffusion equations with time delay. Firstly, we discretize the time fractional derivative with a second order finite difference …
09.05.2019 | Original Research
### Ground state sign-changing solutions for a class of nonlinear fractional Schrödinger–Poisson system with potential vanishing at infinity
In this paper, we study the following nonlinear fractional Schrödinger–Poisson system (0.1) $$\left\{ \begin{array}{ll} (-\Delta )^{s}u+V(x)u+\phi (x)u=K(x)f(u), & x\in \mathbb {R}^{3},\\ (-\Delta )^{t} \phi =u^{2}, & x\in \mathbb {R}^{3}, \end{array}\right.$$ …
## Current Issues
### About this journal
Journal of Applied Mathematics and Computing (JAMC) is a broad-based journal covering all branches of computational or applied mathematics with special encouragement to researchers in theoretical computer science and mathematical computing. It covers all major areas, such as numerical analysis, discrete optimization, linear and nonlinear programming, theory of computation, control theory, theory of algorithms, computational logic, applied combinatorics, coding theory, cryptograhics, fuzzy theory with applications, differential equations with applications.
JAMC features research papers in all branches of mathematics which have some bearing on the application to scientific problems, including areas of actuarial science, mathematical biology, mathematical economics and finance.
Further information
|
• anonymous
Rationalize the numerator of the expression. Then simplify your answer. (√7 – 3)/5. my answer is incorrect. i have -2/5(√7+3)
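For reference, a worked rationalization (added here; it was not part of the original thread). Multiply the numerator and denominator by the conjugate √7 + 3:
(√7 − 3)/5 · (√7 + 3)/(√7 + 3) = (7 − 9)/(5(√7 + 3)) = −2/(5(√7 + 3))
So −2/(5(√7 + 3)) is the rationalized form; written as "-2/5(√7+3)" it can be read as (−2/5)·(√7 + 3), which is a different number and is likely why the answer was marked incorrect.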
Mathematics
|
undefined symbol. Undefined Symbol Theater is very proud to announce that the Playwrights Center of San Francisco has partnered with us to produce Singulariteen at the San Francisco Fringe 2013! This is a very exciting development and it will enable us to produce the best show possible. Symbols can also remain undefined when a symbol reference in a relocatable object is bound to a symbol definition in an implicitly defined shared object. Hello, I tried to build a program based on the Catalyst, the paraview and my program is ok to compile, but I got this error when I run the program:. Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'Symbol(immer-state)') heroku redux. We have removed some processor-independent functionality from processor-specific libraries. Some extensions, though, define the following conventions of addition and multiplication:. TensorRT import - Undefined symbol. For instance, suppose we have the definition of convergence of sequence. This is a frequent mistake, already discussed several times in this forum. While researching further it appears the issue might be specific to PHP version series 5. Undefined first referenced symbol in file vector::insert_aux(DPEEDFU*, const DPEEDFU&). This error means there is a mismatch between the byte size of Unicode characters used by the environment . hello guys i'm trying to build this unity game to Xcode but I'm facing this problem. The undefined symbols for architecture x86_64: can be easily fixed by defining a value inside the missing declared statements. This problem can have any of the following causes: Cause 1. Generating an Executable Output File When generating an executable output file, the link-editor's default behavior is to terminate with an appropriate error message should any symbols remain undefined. x86_64 Additional info: I checked version of this library for Fedora31 (krb5. error #10234-D: unresolved symbols remain. That means the c++ std::string abi doesn’t match between building pytorch source and building cpp extensions. trying test code which uses app_easy_timer and im getting the following compile error Error: L6218E: Undefined symbol app_easy_timer (referred from . Here is a brief overview of what I'm. You need to be using the dllexport keyword to declare a function to be exported. 3: undefined symbol: EVP_KDF_ctrl, version OPENSSL_1_1_1b" Version-Release number of selected component (if applicable): krb5-libs-1. Answer: There is no symbol that literally means “undefined” as far as I know. This can be resolved by forcing liblua. Also, put the code in php tag next time _____. And here is the output from running the command that Zirias provided. client import FirewallClient, FirewallClientIPSetSettings, \\ File. Its caused by _GLIBCXX_USE_CXX11_ABI=1 when compile pytorch from source. For example, you would say: __declspec (dllexport) int __cdecl sum (int a, int b); Refer to the Microsoft Exporting from a DLL Using __declspec (dllexport) document. Thanks Axel, i tried a simple case to load one libhello. PROBLEM: _main is an undefined symbol. The symbol has no well-defined meaning by itself, but an expression like {} → is shorthand for a divergent sequence, which at some point is eventually larger than any given real number. (1) 使用file 命令查看 so库的架构,看看是否与平台一致. What is meant by undefined symbol error in C++? It means you haven't written a function, or you haven't created a variable, or you haven't . You want to use try/catch exception handling so you must use c++ not c. 
) add using namespace std; before the definition of main. Hi goldwake, I have been having the same issue as you I think. Assuming a number is positive, dividing it by a very small positive number that approaches zero. For what it's worth, I also am unaware of any such symbol, at least anything which is generally accepted by the math community. ld: 0711-317 ERROR: Undefined symbol:. I'm trying to install TensorRT 5 from the tar file on Ubuntu 16. How does this relate to How to upload two. error #10010: errors encountered during linking; "testing-VecN-library. I created a basic spec file for my NavigationStack component. Then he just extracted TBB into /usr/local/ which was where even I had extracted my TBB library and the shared library was able to detect TBB's shared libraries there and it worked!. thanhkien84 opened this issue Nov 26, 2018 · 3 comments Labels. I've finally managed to compile my V3 project in V4, . 1 Successfully installed keras-2. Once I pip installed the wheels file, I tried import tensorrt in a python shell, but get Traceback (most recent call la…. const char *foo = InvalidateImage (bar); Because it is a library, you would not notice this until you attempt to run the program which uses this symbol. Syntax --undefined=symbol Usage Causes the linker to: Create a symbol reference to the specified symbol name. 104-2 Perl wrappers for cairo local/cairo-ubuntu 1. Undefined symbol _file_set first referenced in file /usr/ucblib/libucb. What is the cause of the "undefined symbol. [Bug ld/29086] -Wl,--wrap=foo with LTO leads to undefined symbol, cvs-commit at gcc dot gnu. This default behavior allows the shared object to import symbols from either relocatable objects or from other shared objects when the object is used to create a dynamic executable. EDIT (PARTIAL) SOLUTION!!!!: After following these steps of the link below I managed to fix the scipy error, I uninstalled numpy, scipy and scikit-learn. It was added in a newer version of GAUSS than you have. Because DrawCar is not a part of the Main function! Nor can it be Move your public variables out of the main method and into the main body of the class. It's a program processing messages from an external system and seemed to fail only when the processing of the message triggered some given operations. 0 update 3, these symbols are *Undefined* in the libraries libmkl_def. Look at the library dependencies using the ldd tool. Undefined symbol error, but it's in the static library : r/C_Programming. 10 | grep -i CRYPTO_set_locking_callback. In my situation (Debian 9) the order was correct: the php-mysqlnd. • You use the report field in a calculated field, and then you remove the report field. The symbols of infinity In analysis, measure theory and other mathematical disciplines, the symbol is frequently used to denote an infinite pseudo-number, along with its negative,. When the link-editor is generating a shared object output file, it allows undefined symbols to remain at the end of the link-edit. error: Undefined symbol '_DisplayError' referenced in "c:\Users\i87278\Desktop\OCR exp\cvibuild. Moschops (7244) It means you haven't written a function, or you haven't created a variable, or you haven't linked against the library or object code that contains the missing function or variable. I have already created a static library that works, but when I test my dynamic library, I get an unresolved symbol. 1 on CentOS 8 and was having immense stability problems. For c++ you need to change these things. Accept Solution Reject Solution. 
After all input files have been read and all symbol resolution is complete, the link-editor searches the internal symbol table for any . For example, if you try to calculate the value of pi using an infinite series, you will eventually reach a point where the value is undefined. LI72305: UNDEFINED SYMBOLS FOR TEMPLATE FUNCTIONS. so | grep pthread_setname_np 000000000000e0f0 w DF. o I am migrating the code to Sun 5. [ 93%] Linking C executable Test_BitIO Undefined symbols . undefined symbol hello guys, i have just installed mplab. I don't know whether you may run into a problem when you append all those libraries to LD_LIBRARY_PATH rather than putting the needed additional ones ahead of the default ones. By default putch () is defined as an empty function. conf says: LoadModule php5_module modules/libphp5. Undefined, a variable lacking initialization. By disabling cookies, some features of the site will not work. Hello, I am currently working on a program that does real-time facial recognition using a raspberry pi v2 camera module. I am experimenting an issue with my Keil project. If it's different, please provide the info that Wile_E asked for. Dec 19, 2007 3:46PM edited Dec 20, 2007 12:48PM in Developer Studio C/C++/Fortran Compilers. so 807578 Member Posts: 13,959 Green Ribbon Jan 8, 2009 5:14AM edited Jan 13, 2009 1:36PM in Developer Studio C/C++/Fortran Compilers. If you have a project file loaded, make sure your source code file (. cpp Syntax to Your Document - Code Example 3 - Adding the Similarity. When the library should be loaded the system raised an UnsatisfiedLinkException withe the message: "undefined symbol: __dso_handle". h Syntax to Your Document - Code Example 4. lgi/Makefile has to be edited like below. Undefined symbols and where to find them. Add a comment | Sorted by: Reset to default. mexa64': Undefined symbol: mxErrMsgTxt" It is my understanding that mxErrMsgTxt is legacy code and shouldn't be used. Understanding "Undefined Symbol" Error Messages. What is the maths symbol for undefined?. Hi I'm trying to use prof and so have to use the static version of our libraries. error: undefined symbol: stdout. 0018212: cmake: undefined symbol: archive_write_add_filter_zstd. It means the linker has looked through all the compiled code you told it to, and it still can't find what it's looking for. The official dedicated python forum. Inside the jail, I get no return, but from host, I get the following: Code: [email protected]:~ # objdump -TC /usr/lib/libpthread. so: undefined symbol: _ZN12ninebot_algo10AprAlgoLog9instance_E. How to remove common errors in TurboC compiler || C++ graphics || Linker error || undefined symbol. qt creator symbols not found for architecture x86 64 Undefined symbols for architecture x86 64 | Compiling CPP c++ programs in MAC or . 3 to manage virtual environment and python versions. I'm developing a DLL in LabWindows/CVI 2012. undefined symbol: __ZN2at19UndefinedTensorImpl10. 45] ifcfg-rh: dbus: couldn't initialize system bus: Could not connect: Connection refused dbus is showing symbol lookup errors Traceback (most recent call last): File "/usr/bin/firewall-cmd", line 31, in from firewall. 0 configuration and merge the differences between 8. what is an undefined symbol error? Last edited on Dec 18, 2011 at 1:58pm. LIB + MATH (S,T,C,M,L - for model) + C (S,T,C,M,L - for model) Undefined Symbol Linking "C. org, 2022/05/04; Prev by Date:. Change your LinkWith attribute to: [assembly: LinkWith (, ForceLoad = false, SmartLink = true)] This will make Xamarin. 
I hardly know what Qt_5 is for except that it is an IDE. gss, line 12] In this example, we can tell that arimaFit is a GAUSS procedure or function. • You change the field type to a type other than data. It means you haven't written a function, or you haven't created a variable, or you haven't linked against the library or object code that contains the missing function or variable. The program is actually a library that is loaded by another program. Resolving Undefined Symbol linker messages. Written by Embarcadero USA on Thursday, 2 July 1998 . Copy link thanhkien84 commented Nov 26, 2018. a, just reverse the sequence: lib2. For c++ you need to change these things 1. gareth July 29, 2020, 9:22pm #1. Undefined Behaviour in C and C++; What are common programming errors or 'gotchas' in Python? What is the Symbol Table? What is a reference variable in C++? What are all the common undefined behaviours that a C++ programmer should know about? Difference between Compile Time Errors and Runtime Errors in C Program. The following list provides solutions to some of the more common causes of "undefined symbol" errors: Undefined Symbol When TLINKing from DOS Command Line ===== The TLINK command line must have the libraries in the following order ( GRAPHICS. OS : Windows 7 64 bit Laz: Lazarus 1. This is usually caused by a misspelled identifier name, or missing declaration of the identifier used. " In math, this term refers to a value that is not assigned to any specific number. Undefined symbols can affect the link-edit process according to the type of symbol, together with the type of output file being generated. cpp file) which has main in it is listed in the. According to Math - Symbol for Undefined, dividing a number by 0 may be represented by UNDEF, but the staff are unaware of any specific symbol meaning "undefined". This is why you are getting the undefined symbol errors since CVI can't find the exported functions. do you have any idea of this problem. The program was working but I was having latency issues and read that I. yum doesn't work: yum search pycurl This problem occurred: /usr/lib64/python2. This site uses cookies to store information on your computer. UNDEFINED SYMBOL AT COMPILE TIME An undefined symbol at compile time indicates that the named identifier was used in the named source file, but had no definition in the source file. This is the first time I try to upgrade to 2. Deep Learning (Training & Inference) TensorRT. The symbols i_malloc, i_free, are defined in libmkl_core. iOS ask the native linker to remove unused code from the native library, and if you're lucky, the reference to the inexistent method will be in unused code, so it will end up removed. undefined symbol: g_unicode_script_get_type. Re: :0: error: (499) undefined symbol: Monday, August 24, 2015 6:17 PM ( permalink ) +4 (4) Your code cannot compile without a "main" function, so we are guessing here with half of your project. What causes the undefined symbol. The error Undefined symbols for architecture arm64: "_OBJC_CLASS_$_SKAdImpression" during the iOS build . what is an undefined symbol error? It means you haven't written a function, or you haven't created a variable, or you haven't linked against . so: undefined symbol: CRYPTO_num_locks . There are two way to solve this problem: build cpp extensions with -D_GLIBCXX_USE_CXX11_ABI=1. Good judgement is the result of experience … Experience is the result of bad judgement. 
After all of the input files have been read and all symbol resolution is complete, the link-editor searches the internal symbol table for any symbol references . Dividing by zero is not considered infinity (∞), it is UNDEF. I'm trying to set up a HelloWorld subscriber on a VxWorks 7 system but I am getting a number of undefined symbols. Bug 714140 - undefined symbol: Perl_Gthr_key_ptr. I want to post a simple solution to a problem I had with a dynamic linked library which was coded in c++. The symbol has no well-defined meaning by itself, but an expression like. 'Undefined symbols for architecture' on iOS I've just started out adding Fabric support to my Navigation router. Import Error: undefined symbol: png_riffle_palette_neon. Last edited on Dec 18, 2011 at 12:22pm. Error string: Could not load library (Poco exception = /home/marco/catkin_ws/devel/lib//librqt_template_plugin. Undefined Symbol F01343D00789 Sequence Num. Free Pre-Algebra, Algebra, Trigonometry, Calculus, Geometry, Statistics and Chemistry calculators step-by-step. These seem typically to include -lm -lc -lgcc -lgcc_s -ldl -lc (libc linked twice). these errors are inside a custom library. 1 According to Math - Symbol for Undefined, dividing a number by 0 may be represented by UNDEF, but the staff are unaware of any specific symbol meaning "undefined". The symbols of infinity [ edit ] In analysis , measure theory and other mathematical disciplines, the symbol ∞ {\displaystyle \infty } is frequently used to denote an infinite pseudo-number, along with its negative, − ∞ {\displaystyle -\infty }. Undefined, a function or variable lacking a declaration. If a sequence$(a_n)_{n=m}^\infty$is not converging to any real number, we say that the sequence$(a_n)_{n=m}^\infty$is divergent and we leave$\lim_{n\to\infty}a_n$undefined. And i want to try a simple program: #include #include #pragma config WDT = OFF void main (void) {printf("hello world"); while(1);} if i want to build the program i get the following error: Build C:\Documents and Settings\ruud\My Documents\probeer for device 16F690. Turned out that RHEL had TBB installed by default and my colleague had to delete the TBB files in /usr/include and /usr/lib64/. After all of the input files have been read and all symbol resolution is complete, the link-editor searches the internal symbol table for any symbol references that have not been bound to symbol definitions. Undefined Symbols (Linker and Libraries Guide). Resolving "undefined symbol __Vectors' referenced in expression" error. Tags: Posted by JohanEkdahl: Tue. Undefined behavior, computer code whose behavior is not specified under certain conditions. M A T H E M A T I C S · Symbols used in Set Theory Math Classroom, Math Teacher, Teaching Math, Math Vocabulary. 3 When trying to execute previously well-functioning keras/tens. Marking as WontFix since this is an issue that should be fixed in Chromium, and it does not impact the CEF automated builders (config here). G0025 : Undefined symbol: 'arimaFit' [arimafit-example. 0 for quite some time and decided to update to 1. I am getting following errors when compiling source code in Keil-5. channelFoam: symbol lookup error:/OpenFOAM/OpenFOAM-2. 4-4 A rendering library for Kate streams using Pango and Cairo local/pixman 0. undefined symbol: _ZN12ninebot_algo10AprAlgoLog9instance_E. This problem occurs if the following conditions are true: • You put a field in a footer or in the report body. 
Description of problem: After upgrade to Fedora 32, Matlab 2020a complain about: "symbol lookup error: /lib64/libk5crypto. I'm working on my Cmake build scripts and in the process I've found a weird error. Undefined, an unavailable linker symbol. I’m trying to install TensorRT 5 from the tar file on Ubuntu 16. To do this, I am using the BFD library. "error: undefined symbol: _ZdlPvm, version Qt_5". 0/platforms/linux64GccDPOpt/lib/libincompressibleLESModels. a, it is too late -- lib1 has already been searched. Undefined symbol in dynamic library. Undefined symbol: _sampleTextMethod. Dividing a number by zero is usually considered undefined. means that you failed to include all the libraries you need in your cc. CodeProject, 20 Bay Street, 11th Floor Toronto, Ontario, Canada M5J 2N8 +1 (416) 849-8900. txt the symbol shows up in the file, but has no address associated with it. h Code of the Complete Syntax - Code Example 2 - Adding the Similarity. Stack Exchange network consists of 180 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. or "undefined symbol: PyUnicodeUCS2\_" errors. SOLUTION: main is the entry point for every C/C++ program. ini file was at the top of the list of modules. Repeat this for every Undefined symbol in the linker error message, and you'll be . Resolving "undefined symbol __Vectors' referenced in expression" error But this did not solve the problem! In the explorer project accidentally found that in the folder: CMSIS, in this folder another folder EFM32GG and it has two files (startup_gcc_efm32gg. Assuming a number is positive, dividing it by a very small positive number that approaches zero would yield +∞. We are grateful to PCSF for their support of new works. If is not in the domain of , then this is written as () ↑, and is read as "() is undefined". This is an issue reported earlier and it remains for the following versions of tensorflow and keras: Successfully installed tensorflow-2. dbus is timing out and processes are failing to connect Jan 1 01:00:00 hostname NetworkManager[1534]: [546. To change the modules order just change the trailing number so you should have "10-mysqlnd. For example, continuing with the files main. i have 2 separate small projects i tried to compile, one just be trying to learn how to use the xml library, and the other is a multi file c program just for fun, both compile fine with only some small warnings, the usually stuff like "inter to pointer. Iam attempting a script to return the current cursor position using the getyc macro I have #included the curses. so which are depended on each other?If it's the same: please don't open two topics on the same - well, topic, at the same time. h however on compilation (with gcc) it errors with Undefined symbol. If the command line has the sequence lib1. c) but they are empty (zero length)!. unable to load undefined symbol _z15InvalidateImageSs I am trying to determine why I am getting this error. Undefined symbol error after compiling a new LES model. c(4): Undefined symbol _output_word in module ccode Warn : :Program has no entry point both files are compiled and assembled under one project and under. Functions are *not* defined in header files, but only declared, . The issue must be from some incompatibility between Qt_5 in my computer and the one that runs with Coppeliasim, but i don't know in what way as this is all new to me. 1: /bin/sh: Undefined symbol "[email protected]_1. 
Answer (1 of 5): There is no generally accepted symbol for "undefined. APAR is sysrouted FROM one or more of the following: APAR is sysrouted TO one or more of the following: Fix information. mustaqimM reacted with thumbs up emoji. Normally it's preferrable to start with a 9. 0-3 C++ bindings to Cairo vector graphics library local/libtiger 0. This is a common problem that many web developers come across while working with several web utilities for designing programs. - John Omielan Dec 23, 2018 at 3:03. On some analysis using nm , I found that the symbols which are flagged as UNDEFINED are in fact getting defined in the temporary objects created in the SunWS. If that undefined name is not referenced in your source, then it usually. so" fails: undefined symbol: sqlite3_libversion The only solution I am aware of at the present time is to either disable the PDO SQLite3 extension, by editing the system PHP configuration file "php. Summary: undefined symbol: Perl_Gthr_key_ptr Keywords: Status: CLOSED RAWHIDE Alias: None Product: Fedora Classification: Fedora Component: perl-Compress-Raw-Zlib Sub Component: Version: rawhide Hardware: Unspecified OS: Unspecified. Static libraries are searched in order for symbols that have been referenced but not yet defined. o is compiled into an object file everybit BUT there is no reference in this line to everybit hence the undefined symbols gcc -o everybit_harvey main. 5 configuration file that has not been fully migrated. o: undefined reference to symbol '[email protected]@GLIBC_2. Perhaps you have such a long path that it doesn't find all the necessary ones. After creating a Dynamic Link Library project in Debug configuration, I set the Build»Target Type to Static Library. Error: L6218E - undefined symbol. so LIBFLAG = -shared CCSHARED = -fPIC LIBS +=$ (shell $(PKG_CONFIG) --libs lua) endif endif. error: undefined symbol: stderr. There is no symbol that literally means “undefined” as far as I know. When running the SAP HANA hardware and cloud measurement tools as adm user a similar error is thrown: ". Learn more about pango, matlabwindow, linux, symbol lookup error MATLAB. This showed up in a nightly CI job whose configuration has not changed.$ file file: symbol lookup error: file: undefined symbol: magic_setparam $which file /usr/local/bin/file$ ldd $(which file) linux-vdso. whenever i try to compile a c program, the linker gives that "undefined symbol" stuff. Eventually got to the logs and found a lot of messages like this: Service 'indexer' exited with status 127. undefined symbol "cout", "cin", and "endl". 807578 Member Posts: 13,959 Green Ribbon. ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more. user24036 October 31, 2021, 9:08am #1. You have to remove local library and local Git, then reinstall Git with dependencies by sudo rm -v$(which git) sudo rm -v . Warning: module 0xffff8000006ffaf0 holds reference to undefined symbol _ZNSt6localeC1Ev. ) replace #include with #include. I am creating a dynamic library of object files that contain functions to emulate the behavior of objdump and nm. a does not need any symbols from lib2. "undefined symbol" when trying to compile c?. Get a virtual cloud desktop with the Linux distro that you want in less than five minutes with Shells! With over 10 pre-installed distros to choose from, the worry-free installation life is here! Whether you are a digital nomad or just looking for flexibility, Shells can put your Linux machine on the device that you want to use. 
Using apt-get install I installed the following things: apt-get install libopenblas-base apt-get install libopenblas-dev apt-get install python-dev apt-get install gcc apt-get install gfortran. Answers (1) This happens when you use an 8. What is the cause of the "undefined symbol: amdgpu_bo_list_create_raw" error on RHEL8 VMware Guest?. build pytorch with -D_GLIBCXX_USE_CXX11_ABI=0. Example: if the library calls '_nmalloc' that function does not exist in our libraries (we supply new header files in Borland C++. SOLUTION: undefined symbol: __dso_handle — oracle. There is no symbol that literally means "undefined" as far as I know. “Undefined symbol: protocol descriptor for Swift. The first 3 minutes are automatically charged: \$2. I am using the STM32CubeF7 HAL to programa a module I need. By continuing to use our site, you consent to our cookies. Error is coming because you are trying to access class member variable x, y, z from outside in the main() where x, y, z is not declared. Usually one states "we leave the statement undefined". Tour Start here for a quick overview of the site Help Center Detailed answers to any questions you might have. undefined symbol: X means always that X should be exported from one of loaded libraries, but it's not. ExpressibleByFloatLiteral” Code Answer // You need to exclude architecture arm64 if you are . 16-2 Cairo vector graphics library, with Ubuntu's LCD rendering patches local/cairomm 1. In most cases output is directed to the serial port. Issue an implicit --keep=symbol to prevent any . Any one suggest me what i am missing to define symbol?. The meaning of the message is that the name you are using as an operand on the RCALL instruction cant be found by the assembler. As this is more or less obvious then here's some advice/tips to work on: Double check the spelling. undefined symbol: __ZN2at19UndefinedTensorImpl10_singletonE #370. 3 as per the following PHP bug report: PHP :: Bug #48614 :: Loading "pdo_sqlite. Warning: module 0xffff8000006ffaf0 holds reference to undefined symbol _ZNSt6_MutexD1Ev. Use Math Input Mode to directly enter textbook math notation. These symbol references are referred to as undefined symbols. function is running before my redux state is available, TypeError: Cannot read properties of undefined (reading 'find') 0 Cannot read properties of undefined (reading 'map') in react js while using redux. 5" here I cannot find path, because every thing get error, I cannot use ls or find file and folders or any other command in my /rescue/sh area. Closed rmrao opened this issue Jun 21, 2019 · 3 comments Closed undefined symbol: __ZN2at19UndefinedTensorImpl10_singletonE #370. CentOS 8: undefined symbol: EVP_KDF_ctrl in libcrypto. This is default behaviour on linux systems by Makefile. Performing standard arithmetic operations with the symbols is undefined. If you are not happy with the use of these cookies, please review our Cookie Policy to learn how they can be disabled. However, in the code I downloaded, it's already commented out and replaced by mexErrMsgIdAndTxt. ephore October 23, 2018, 8:37am #1. When I try to build my project after changing the Target Type, LabWindows/CVI throws several link errors: Undefined symbol'__CompiledDebuggingLevel' referenced in "C:\Program Files\National Instruments\CVI2012\bin\msvc\cvistart. my problem is "undefined symbol try" how can I handle this Jan 7 '09 #8. so is loaded, while i can’t call the function defined in the libhello. 
Make sure you write a function called main (all lowercase) in your program. Make a new virtual environment with pyenv called XXXX. Once I pip installed the wheels file, I tried. Trying to execute CMake gives the error: cmake: symbol lookup error: cmake: undefined symbol: archive_write_add_filter_zstd. 即 symbol lookup error: libpathplan. o -arch x86_64 -framework CoreServices Undefined symbols: "_bitarray_reverse", referenced. undefined symbol: _ZTIN10tensorflow8OpKernelE_ #32. I am trying to compile a LabWindows/CVI project, but I am getting errors that are referenced as Link Error or Undefined Symbol errors. Dividing by a very small negative number that approaches zero would yield -∞. By other hand, in Computer science there are some symbols: undefined, null and NaN. Pressed CTRL+H to find and replaced PM_Payment_WORK to pmRemittanceTemp. Undefined value, a condition where an expression does not have a correct value. Regenerated the report and found 2 missing fields, replaced the missing fields with an alternative from the. ImportError: /home/pybind11_example. Thank you, but I can't find where is the problem, I deleted everything from proj\lb\i386-win32 and at lazarus I Run "run / clean - build -> right corner, delete" and then build. gercurx Any ideas where I can find a solution or what I've missed (7 Replies). 6-1 The pixel-manipulation library for X and cairo local/python-cairo 1. At a customer site, we had a C++ program (renamed for the purpose of this blog to myprogram) which was failing after running for some time. Ultimately some of the linker warnings can be ignored (for any observer sets that are not being used by your code), but, if the sections and section references aren't being linked correctly, then ignoring them will be ignoring a real problem. 2 & using -compat=4 for compilation. You should find out in which library requested symbol . Those are generated freshly by the build process but haven't been used for years. However there is always a workaround! Below the steps I followed: I exported the report to a package file. That said, array identifiers ARE pointers, but the compiler is nice about it so doing ptrPrintVal = &printVal; or doing ptrPrintVal. It's the “Undefined symbols for architecture x” error. Your problem is not being sure which language you are coding in. There are four main reasons why a procedure or function might be undefined in your GAUSS code: 1. I have problem with error 017: undefined symbol. Undefined Symbol is in a Microsoft Library ===== 1) If the libraries call functions that exist in the MSC RTL (Runtime Library) but do not exist in our library, you will get 'undefined symbol' errors during link. The printf () function calls another function putch () to output the character after formatting. ImportError: /home/pybind11_example. It is often represented with the word UNDEF. 0 with Service Pack 4 to Microsoft Dynamics Version 2010 with Service Pack 1, when the customer sent me the package,. Undefined symbols for architecture arm64: "_AVPPluginSetDebugLogFunction" , referenced from : _OSXMediaPlayer_AVPPluginSetDebugLogFunction_m851849289 in Bulk_Assembly - CSharp_1. extern InvalidateImage (const char *); and later using it. "Invalid MEX-file '{path-to}/CCGHeart. To Solve undefined symbol: _PyUnicode_DecodeUnicodeEscape Error By using typed-ast latest version Issue was solved in My case. Last week I have upgraded this customer from Microsoft Dynamics GP Version 10. This last one provides the missing symbol: Raw. 
To allow printf () to write to a specific output the putch () function needs to be redefined. - Solution 2: Fixing This Error for a Different Program Recreating This Problem in Your Program - Code Example 1 - Adding the Main. Finally, comparing the size of the library files on this system and at another customer site with the exact same version of the software gave us the answer: somehow a different version of one of our library landed there and didn’t contain the missing symbol. The solution is to add following line right in the start of your c++ code: extern void *__dso_handle. Re: Undefined symbol: atexit « Reply #4 on: January 23, 2022, 04:44:54 pm » Yes, openssl deprecated a lot of cyphers in recent versions, but they are still in the Pascal bindings/header file. GitHub Gist: instantly share code, notes, and snippets. I have written a program to calculate the price of a phone call. Using the centos:8 Docker image as of this morning. rmrao opened this issue Jun 21, 2019 · 3 comments Comments. It means you haven't written a function, or you haven't created a variable, or you haven't linked against . Solved: error Undefined symbol. I have no idea why Turbo C isn't complaining about the user of the "public" keyword on them, but it's probably because it's a very old compiler. Solve a simple undefined symbols error These error messages tell us that line 6 of our code is using two symbols that have not yet been .
|
# NAG Library Function Document
## 1Purpose
nag_jumpdiff_merton_price (s30jac) computes the European option price using the Merton jump-diffusion model.
## 2Specification
#include <nag.h>
#include <nags.h>
void nag_jumpdiff_merton_price (Nag_OrderType order, Nag_CallPut option, Integer m, Integer n, const double x[], double s, const double t[], double sigma, double r, double lambda, double jvol, double p[], NagError *fail)
## 3Description
nag_jumpdiff_merton_price (s30jac) uses Merton's jump-diffusion model (Merton (1976)) to compute the price of a European option. This assumes that the asset price is described by a Brownian motion with drift, as in the Black–Scholes–Merton case, together with a compound Poisson process to model the jumps. The corresponding stochastic differential equation is,
$\frac{dS}{S} = \left(\alpha - \lambda k\right) dt + \hat{\sigma} \, dW_t + dq_t .$
Here $\alpha$ is the instantaneous expected return on the asset price, $S$; $\hat{\sigma}^{2}$ is the instantaneous variance of the return when the Poisson event does not occur; $dW_t$ is a standard Brownian motion; $q_t$ is the independent Poisson process and $k=E\left[Y-1\right]$ where $Y-1$ is the random variable change in the stock price if the Poisson event occurs and $E$ is the expectation operator over the random variable $Y$.
This leads to the following price for a European option (see Haug (2007))
$P_{\mathrm{call}} = \sum_{j=0}^{\infty} \frac{e^{-\lambda T} \left(\lambda T\right)^{j}}{j!} C_{j}\left(S, X, T, r, \sigma_{j}^{\prime}\right),$
where $T$ is the time to expiry; $X$ is the strike price; $r$ is the annual risk-free interest rate; ${C}_{j}\left(S,X,T,r,{\sigma }_{j}^{\prime }\right)$ is the Black–Scholes–Merton option pricing formula for a European call (see nag_bsm_price (s30aac)).
$\sigma_{j}^{\prime} = \sqrt{z^{2} + \delta^{2} \frac{j}{T}}, \qquad z^{2} = \sigma^{2} - \lambda \delta^{2}, \qquad \delta^{2} = \frac{\gamma \sigma^{2}}{\lambda},$
where $\sigma$ is the total volatility including jumps; $\lambda$ is the expected number of jumps given as an average per year; $\gamma$ is the proportion of the total volatility due to jumps.
The value of a put is obtained by substituting the Black–Scholes–Merton put price for ${C}_{j}\left(S,X,T,r,{\sigma }_{j}^{\prime }\right)$.
The option price ${P}_{ij}=P\left(X={X}_{i},T={T}_{j}\right)$ is computed for each strike price in a set ${X}_{i}$, $i=1,2,\dots ,m$, and for each expiry time in a set ${T}_{j}$, $j=1,2,\dots ,n$.
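A minimal calling sketch (not the library's documented example program; it assumes the usual NAG C conventions such as INIT_FAIL and NE_NOERROR, and uses made-up market data):
#include <stdio.h>
#include <nag.h>
#include <nags.h>

int main(void)
{
    /* Hypothetical inputs: two strikes, three expiries */
    double x[] = {90.0, 100.0};       /* strike prices X_i               */
    double t[] = {0.25, 0.5, 1.0};    /* times to expiry T_j, in years   */
    Integer m = 2, n = 3;
    double s = 100.0;                 /* underlying price S              */
    double sigma = 0.3;               /* total volatility, incl. jumps   */
    double r = 0.05;                  /* risk-free rate                  */
    double lambda = 2.0;              /* expected jumps per year         */
    double jvol = 0.4;                /* share of volatility from jumps  */
    double p[6];                      /* m*n output prices               */
    NagError fail;

    INIT_FAIL(fail);
    nag_jumpdiff_merton_price(Nag_RowMajor, Nag_Call, m, n, x, s, t,
                              sigma, r, lambda, jvol, p, &fail);
    if (fail.code != NE_NOERROR) {
        printf("Error from nag_jumpdiff_merton_price: %s\n", fail.message);
        return 1;
    }
    /* Row-major storage: p[(i-1)*n + (j-1)] holds the price for strike x[i-1] at expiry t[j-1] */
    for (Integer i = 0; i < m; i++)
        for (Integer j = 0; j < n; j++)
            printf("X = %6.2f  T = %5.2f  price = %9.4f\n", x[i], t[j], p[i*n + j]);
    return 0;
}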
## 4References
Haug E G (2007) The Complete Guide to Option Pricing Formulas (2nd Edition) McGraw-Hill
Merton R C (1976) Option pricing when underlying stock returns are discontinuous Journal of Financial Economics 3 125–144
## 5Arguments
1: $\mathbf{order}$Nag_OrderTypeInput
On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.3.1.3 in How to Use the NAG Library and its Documentation for a more detailed explanation of the use of this argument.
Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or $\mathrm{Nag_ColMajor}$.
2: $\mathbf{option}$Nag_CallPutInput
On entry: determines whether the option is a call or a put.
${\mathbf{option}}=\mathrm{Nag_Call}$
A call; the holder has a right to buy.
${\mathbf{option}}=\mathrm{Nag_Put}$
A put; the holder has a right to sell.
Constraint: ${\mathbf{option}}=\mathrm{Nag_Call}$ or $\mathrm{Nag_Put}$.
3: $\mathbf{m}$IntegerInput
On entry: the number of strike prices to be used.
Constraint: ${\mathbf{m}}\ge 1$.
4: $\mathbf{n}$IntegerInput
On entry: the number of times to expiry to be used.
Constraint: ${\mathbf{n}}\ge 1$.
5: $\mathbf{x}\left[{\mathbf{m}}\right]$const doubleInput
On entry: ${\mathbf{x}}\left[i-1\right]$ must contain ${X}_{\mathit{i}}$, the $\mathit{i}$th strike price, for $\mathit{i}=1,2,\dots ,{\mathbf{m}}$.
Constraint: ${\mathbf{x}}\left[\mathit{i}-1\right]\ge z\text{ and }{\mathbf{x}}\left[\mathit{i}-1\right]\le 1/z$, where $z={\mathbf{nag_real_safe_small_number}}$, the safe range parameter, for $\mathit{i}=1,2,\dots ,{\mathbf{m}}$.
6: $\mathbf{s}$doubleInput
On entry: $S$, the price of the underlying asset.
Constraint: ${\mathbf{s}}\ge z\text{ and }{\mathbf{s}}\le 1.0/z$, where $z={\mathbf{nag_real_safe_small_number}}$, the safe range parameter.
7: $\mathbf{t}\left[{\mathbf{n}}\right]$const doubleInput
On entry: ${\mathbf{t}}\left[i-1\right]$ must contain ${T}_{\mathit{i}}$, the $\mathit{i}$th time, in years, to expiry, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
Constraint: ${\mathbf{t}}\left[\mathit{i}-1\right]\ge z$, where $z={\mathbf{nag_real_safe_small_number}}$, the safe range parameter, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
8: $\mathbf{sigma}$doubleInput
On entry: $\sigma$, the annual total volatility, including jumps.
Constraint: ${\mathbf{sigma}}>0.0$.
9: $\mathbf{r}$doubleInput
On entry: $r$, the annual risk-free interest rate, continuously compounded. Note that a rate of 5% should be entered as 0.05.
Constraint: ${\mathbf{r}}\ge 0.0$.
10: $\mathbf{lambda}$doubleInput
On entry: $\lambda$, the number of expected jumps per year.
Constraint: ${\mathbf{lambda}}>0.0$.
11: $\mathbf{jvol}$doubleInput
On entry: the proportion of the total volatility associated with jumps.
Constraint: $0.0\le {\mathbf{jvol}}<1.0$.
12: $\mathbf{p}\left[{\mathbf{m}}×{\mathbf{n}}\right]$doubleOutput
Note: where ${\mathbf{P}}\left(i,j\right)$ appears in this document, it refers to the array element
• ${\mathbf{p}}\left[\left(j-1\right)×{\mathbf{m}}+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• ${\mathbf{p}}\left[\left(i-1\right)×{\mathbf{n}}+j-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
On exit: ${\mathbf{P}}\left(i,j\right)$ contains ${P}_{ij}$, the option price evaluated for the strike price ${{\mathbf{x}}}_{i}$ at expiry ${{\mathbf{t}}}_{j}$ for $i=1,2,\dots ,{\mathbf{m}}$ and $j=1,2,\dots ,{\mathbf{n}}$.
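In other words, with 1-based indices i and j as above, the offset of P(i, j) in the flat array can be computed as in this small helper, which only illustrates the storage convention and is not part of the library interface:

// m = number of strike prices, n = number of expiry times.
// Returns the 0-based offset of P(i, j) in the array p for 1-based i, j.
int p_index(int i, int j, int m, int n, bool row_major) {
    return row_major ? (i - 1) * n + (j - 1)   // Nag_RowMajor
                     : (j - 1) * m + (i - 1);  // Nag_ColMajor
}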
13: $\mathbf{fail}$NagError *Input/Output
The NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).
## 6Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 2.3.1.2 in How to Use the NAG Library and its Documentation for further information.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INT
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{m}}\ge 1$.
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 1$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 2.7.6 in How to Use the NAG Library and its Documentation for further information.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 2.7.5 in How to Use the NAG Library and its Documentation for further information.
NE_REAL
On entry, ${\mathbf{jvol}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{jvol}}\ge 0.0$ and ${\mathbf{jvol}}<1.0$.
On entry, ${\mathbf{lambda}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{lambda}}>0.0$.
On entry, ${\mathbf{r}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{r}}\ge 0.0$.
On entry, ${\mathbf{s}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{s}}\ge 〈\mathit{\text{value}}〉$ and ${\mathbf{s}}\le 〈\mathit{\text{value}}〉$.
On entry, ${\mathbf{sigma}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{sigma}}>0.0$.
NE_REAL_ARRAY
On entry, ${\mathbf{t}}\left[〈\mathit{\text{value}}〉\right]=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{t}}\left[i\right]\ge 〈\mathit{\text{value}}〉$.
On entry, ${\mathbf{x}}\left[〈\mathit{\text{value}}〉\right]=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{x}}\left[i\right]\ge 〈\mathit{\text{value}}〉$ and ${\mathbf{x}}\left[i\right]\le 〈\mathit{\text{value}}〉$.
## 7Accuracy
The accuracy of the output is dependent on the accuracy of the cumulative Normal distribution function, $\Phi$, occurring in ${C}_{j}$. This is evaluated using a rational Chebyshev expansion, chosen so that the maximum relative error in the expansion is of the order of the machine precision (see nag_cumul_normal (s15abc) and nag_erfc (s15adc)). An accuracy close to machine precision can generally be expected.
## 8Parallelism and Performance
nag_jumpdiff_merton_price (s30jac) is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
nag_jumpdiff_merton_price (s30jac) makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the x06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
## 9Further Comments
None.
## 10Example
This example computes the price of a European call with jumps. The time to expiry is $3$ months, the stock price is $45$ and the strike price is $55$. The number of jumps per year is $3$ and the percentage of the total volatility due to jumps is $40%$. The risk-free interest rate is $10%$ per year and the total volatility is $25%$ per year.
### 10.1Program Text
Program Text (s30jace.c)
### 10.2Program Data
Program Data (s30jace.d)
### 10.3Program Results
Program Results (s30jace.r)
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017
|
• anonymous
what formula tells the cost in dollars if chocolate chip cookies are $1.50/a dozen and lemon frosteds are$2.50/a dozen? let c=number of dozens of chocolate chip cookies; L=number of dozens of lemon frosteds; T= total change A. T=4.00(L+C) B. T=1.50C+2.50L C.T=1.50C+2.50L D.T=2.50C+1.50L
|
Load factor as a result of constant roll rate
1. Feb 3, 2015
ENgez
The fuel pressure load equation for an aircraft in a constant roll maneuver has the term (as a result of the roll maneuver):
$\frac{\dot{\phi}^2}{2g}R^2$
where
$R$ - radius from center of fuselage
$g$ - gravitational constant
$\dot{\phi}$ - roll rate
my question is where does the "2" next to the gravitational constant come from?
2. Feb 9, 2015
|
# How do you simplify \frac{2}{\sqrt{3}}?
Apr 14, 2018
$\frac{2 \sqrt{3}}{3}$
#### Explanation:
You want to get rid of the radical in the denominator. The way to do this is to multiply the denominator by another $\sqrt{3}$, so that $\sqrt{3} \cdot \sqrt{3} = 3$ and the radical disappears. However, you must do the same to the numerator to keep the value of the fraction unchanged.
$\frac{2}{\sqrt{3}} = \frac{2}{\sqrt{3}} \cdot \frac{\sqrt{3}}{\sqrt{3}}$
Note that multiplying both sides by $\frac{\sqrt{3}}{\sqrt{3}}$ is fine because that simply equals $1$, and anything times $1$ is itself.
$= \frac{2 \sqrt{3}}{3}$ because $\sqrt{3} \cdot \sqrt{3}$ removes the radical and becomes the radicand, $3$.
|
Explanation of the assertion on unaligned arrays
Hello! You are seeing this webpage because your program terminated on an assertion failure like this one:
my_program: path/to/eigen/Eigen/src/Core/DenseStorage.h:44:
Eigen::internal::matrix_array<T, Size, MatrixOptions, Align>::internal::matrix_array()
[with T = double, int Size = 2, int MatrixOptions = 2, bool Align = true]:
Assertion (reinterpret_cast<size_t>(array) & (sizemask)) == 0 && "this assertion
is explained here: http://eigen.tuxfamily.org/dox-devel/group__TopicUnalignedArrayAssert.html
READ THIS WEB PAGE !!! ****"' failed.
There are 4 known causes for this issue. If you can target C++17 only with a recent compiler (e.g., GCC>=7, clang>=5, MSVC>=19.12), then you're lucky: enabling C++17 should be enough (if not, please report to us). Otherwise, please read on to understand those issues and learn how to fix them.
# Where in my own code is the cause of the problem?
First of all, you need to find out where in your own code this assertion was triggered from. At first glance, the error message doesn't look helpful, as it refers to a file inside Eigen! However, since your program crashed, if you can reproduce the crash, you can get a backtrace using any debugger. For example, if you're using GCC, you can use the GDB debugger as follows:
\$ gdb ./my_program # Start GDB on your program
> run # Start running your program
... # Now reproduce the crash!
> bt # Obtain the backtrace
Now that you know precisely where in your own code the problem is happening, read on to understand what you need to change.
# Cause 1: Structures having Eigen objects as members
If you have code like this,
class Foo
{
//...
Eigen::Vector4d v;
//...
};
//...
Foo *foo = new Foo;
then you need to read this separate page: Structures Having Eigen Members.
Note that here, Eigen::Vector4d is only used as an example, more generally the issue arises for all fixed-size vectorizable Eigen types.
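The separate page linked above explains the fix in detail; as a minimal sketch (assuming nothing else from that page is needed for your class), it amounts to giving the class Eigen's aligned operator new:

#include <Eigen/Core>

class Foo
{
//...
Eigen::Vector4d v;
//...
public:
// Overloads operator new/delete so that heap-allocated Foo objects satisfy
// the alignment required by the fixed-size vectorizable member above.
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
};
//...
Foo *foo = new Foo; // now allocated with the required alignment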
# Cause 2: STL Containers or manual memory allocation
If you use STL Containers such as std::vector, std::map, ..., with Eigen objects, or with classes containing Eigen objects, like this,
std::vector<Eigen::Matrix2d> my_vector;
struct my_class { ... Eigen::Matrix2d m; ... };
std::map<int, my_class> my_map;
then you need to read this separate page: Using STL Containers with Eigen.
Note that here, Eigen::Matrix2d is only used as an example, more generally the issue arises for all fixed-size vectorizable Eigen types and structures having such Eigen objects as member.
The same issue will be exhibited by any classes/functions bypassing operator new to allocate memory, that is, by performing custom memory allocation followed by calls to the placement new operator. This is for instance typically the case of std::make_shared or std::allocate_shared, for which the solution is to use an aligned allocator as detailed in the solution for STL containers.
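Again, the linked page has the details; a minimal sketch of the usual approach is to give the container Eigen's aligned allocator (and member-holding classes the aligned operator new), for example:

#include <vector>
#include <map>
#include <Eigen/Core>
#include <Eigen/StdVector> // std::vector specialization for fixed-size vectorizable types

std::vector<Eigen::Matrix2d, Eigen::aligned_allocator<Eigen::Matrix2d> > my_vector;

struct my_class { /*...*/ Eigen::Matrix2d m; /*...*/ EIGEN_MAKE_ALIGNED_OPERATOR_NEW };
// For std::map the allocator applies to the key/value pair type:
std::map<int, my_class, std::less<int>,
         Eigen::aligned_allocator<std::pair<const int, my_class> > > my_map;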
# Cause 3: Passing Eigen objects by value
If some function in your code is getting an Eigen object passed by value, like this,
void func(Eigen::Vector4d v);
then you need to read this separate page: Passing Eigen objects by value to functions.
Note that here, Eigen::Vector4d is only used as an example, more generally the issue arises for all fixed-size vectorizable Eigen types.
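The fix described on that page is simply to pass such objects by constant reference instead, e.g.:

void func(const Eigen::Vector4d& v); // no by-value copy in a possibly under-aligned stack slot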
# Cause 4: Compiler making a wrong assumption on stack alignment (for instance GCC on Windows)
This is a must-read for people using GCC on Windows (like MinGW or TDM-GCC). If you have this assertion failure in an innocent function declaring a local variable like this:
void foo()
{
Eigen::Quaternionf q;
//...
}
then you need to read this separate page: Compiler making a wrong assumption on stack alignment.
Note that here, Eigen::Quaternionf is only used as an example, more generally the issue arises for all fixed-size vectorizable Eigen types.
# General explanation of this assertion
Fixed-size vectorizable Eigen objects must absolutely be created at properly aligned locations, otherwise SIMD instructions addressing them will crash. For instance, SSE/NEON/MSA/Altivec/VSX targets will require 16-byte-alignment, whereas AVX and AVX512 targets may require up to 32 and 64 byte alignment respectively.
Eigen normally takes care of these alignment issues for you, by setting an alignment attribute on them and by overloading their operator new.
However there are a few corner cases where these alignment settings get overridden: they are the possible causes for this assertion.
# I don't care about optimal vectorization, how do I get rid of that stuff?
Three possibilities:
• Use the DontAlign option to Matrix, Array, Quaternion, etc. objects that give you trouble. This way Eigen won't try to over-align them, and thus won't assume any special alignment. On the down side, you will pay the cost of unaligned loads/stores for them, but on modern CPUs, the overhead is either null or marginal. See here for an example.
• Define EIGEN_MAX_STATIC_ALIGN_BYTES to 0. That disables all 16-byte (and above) static alignment code, while keeping 16-byte (or above) heap alignment. This has the effect of vectorizing fixed-size objects (like Matrix4d) through unaligned stores (as controlled by EIGEN_UNALIGNED_VECTORIZE ), while keeping unchanged the vectorization of dynamic-size objects (like MatrixXd). On 64 bytes systems, you might also define it 16 to disable only 32 and 64 bytes of over-alignment. But do note that this breaks ABI compatibility with the default behavior of static alignment.
• Or define both EIGEN_DONT_VECTORIZE and EIGEN_DISABLE_UNALIGNED_ARRAY_ASSERT. This keeps the 16-byte (or above) alignment code and thus preserves ABI compatibility, but completely disables vectorization.
If you want to know why defining EIGEN_DONT_VECTORIZE does not by itself disable 16-byte (or above) alignment and the assertion, here's the explanation:
It doesn't disable the assertion, because otherwise code that runs fine without vectorization would suddenly crash when enabling vectorization. It doesn't disable 16-byte (or above) alignment, because that would mean that vectorized and non-vectorized code are not mutually ABI-compatible. This ABI compatibility is very important, even for people who develop only an in-house application, as for instance one may want to have in the same application a vectorized path and a non-vectorized path.
# How can I check my code is safe regarding alignment issues?
Unfortunately, there is no way in C++ to detect any of the aforementioned shortcomings at compile time (though static analyzers are becoming more and more powerful and could detect some of them). Even at runtime, all we can do is catch invalid unaligned allocations and trigger the explicit assertion mentioned at the beginning of this page. Therefore, if your program runs fine on a given system with some given compilation flags, this does not guarantee that your code is safe. For instance, on most 64-bit systems buffers are aligned on 16-byte boundaries, so if you do not enable the AVX instruction set, your code will run fine. On the other hand, the same code may assert when moving to a more exotic platform, or when enabling AVX instructions that require 32-byte alignment by default.
The situation is not hopeless though. Assuming your code is well covered by unit tests, you can check its alignment safety by linking it against a custom malloc library that returns only 8-byte-aligned buffers. This way all alignment shortcomings should pop up. To this end, you must also compile your program with EIGEN_MALLOC_ALREADY_ALIGNED=0.
|
# When a pentavalent impurity is added to a pure semiconductor, it becomes
### Question
When a pentavalent impurity is added to a pure semiconductor, it becomes
### Options
A) a p-type semiconductor
B) an n-type semiconductor
C) an insulator
D) an intrinsic semi-conductor
|
## A general linear group over a field is finite if and only if the field is finite
Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 1.4 Exercise 1.4.5
Let $F$ be a field. Show that $GL_n(F)$ is a finite group if and only if $F$ is finite.
Solution: Suppose $F$ is finite. Then there are only finitely many $n \times n$ matrices over $F$, in particular, $|F|^{n^2}$. Thus there are at most $|F|^{n^2}$ elements in $GL_n(F)$.
Suppose now that $F$ is infinite. Note that for all nonzero $\alpha \in F$, the matrix $A_\alpha$ defined such that $a_{i,j} = \alpha$ if $i = j = 1$, 1 if $i = j \neq 1$, and $0$ otherwise has determinant $\alpha \neq 0$ and so is in $GL_n(F)$. Since $F$ has infinitely many nonzero elements and distinct $\alpha$ give distinct matrices $A_\alpha$, it follows that $GL_n(F)$ is infinite.
|
# ISEE Lower Level Math : How to subtract
## Example Questions
### Example Question #11 : Money And Time
Henry had $538.23 in his checking account at the bank before he went shopping. At the mall, he spent $43.91 at one store and $71.84 at another store. How much money does Henry have left in his bank account?
Possible Answers: $422.48, $422.84, $423.52, $423.48
Correct answer: $422.48
Explanation:
To find the difference, you must subtract. But first you must add the two amounts he spent at the mall:
Now subtract. Line up the numbers vertically. Remember to use the rules of borrowing to subtract.
Henry now has $422.48 in his bank account.
### Example Question #2 : How To Subtract
Evaluate:
Possible Answers:
Correct answer:
Explanation:
is the same as .
### Example Question #1 : How To Subtract
The total combined weight of 4 boxes is 25 lbs. Box A weighs 4 lbs, box B weighs 10 lbs, and box D weighs 6 lbs. How much does box C weigh?
Possible Answers: 10 lbs, 5 lbs, 4 lbs, 12 lbs, 6 lbs
Correct answer: 5 lbs
Explanation:
To find the weight of box C, subtract the weight of the other three boxes from the total weight.
Box C = Total - Box A - Box B - Box D
Box C = 25 lbs - 4 lbs - 10 lbs - 6 lbs
Box C = 5 lbs
### Example Question #4 : How To Subtract
Solve the following number sentence.
Possible Answers: None of the other answers.
Correct answer:
Explanation:
To successfully solve this question, the order of operations must be used. There are 4 steps in the order of operations and they are as follows:
1. P - Solve all problems within parentheses.
2. E - Solve any numbers that have an exponent.
3. MD - From the left side of the problem to the right side of the problem, solve all Division and Multiplication.
4. AS - From the left side of the problem to the right side of the problem, solve all Subtraction and Addition.
Based on the rules, the first part of the problem to be solved is the part in parentheses. The other parts of the problem remain the same and are re-written until their step is reached. Next the exponent is to be solved. The exponent means to multiply the base by itself that many times. Next, looking from the left to the right of the problem, there are no multiplication or division calculations to do, so skip to the last step. Finally, starting from the left side of the problem and moving to the right side of the problem, each set of two numbers is solved until the final answer is reached. Starting with then and lastly . The final answer would be .
### Example Question #5 : How To Subtract
Sam is shipping five boxes, which weigh 50 pounds in total. The first four boxes weigh 8 pounds, 12 pounds, 9 pounds, and 15 pounds. How much does the last box weigh?
Possible Answers:
Correct answer:
Explanation:
First, add the first four boxes together: Now, subtract this number from the total to find the weight of the remaining box:
### Example Question #6 : How To Subtract
Evaluate:
Possible Answers:
Correct answer:
Explanation:
First, adding negative numbers is the same as subtracting them: Now subtract:
### Example Question #7 : How To Subtract
Mark went to the grocery store to buy ingredients to make a cake. He bought eggs for $2.99, flour for $4.99, sugar for $3.99, and butter for $2.50. If he paid the cashier with a $20 bill, how much money did he receive in change?
Explanation:
Since the question is asking how much money Mark had left over, we need to subtract the cost of each item he bought from $20. There are two ways of solving this problem:
• We can add up the cost of all the ingredients and then subtract the total cost from $20.00
• We can subtract the cost of each item from $20.00. Note, if we choose to solve the problem this way, we must follow order of operations and go from left to right.
### Example Question #8 : How To Subtract
Solve:
Possible Answers:
Correct answer:
Explanation:
In order to solve this problem, you must follow order of operations. One way of remembering the correct order is to remember the acronym PEMDAS. PEMDAS reminds you what you need to do first.
• P = Parentheses
• E = Exponents
• M = Multiplication
• D = Division
• A = Addition
• S = Subtraction
Note: For Multiplication and Division, and Addition and Subtraction, you do whatever operation comes first from left to right. So, if subtraction comes before addition, you do the subtraction first. In this question, you are asked to solve: First, we subtract from since it is in the parentheses (PEMDAS). Second, we multiply by (PEMDAS). Third, we subtract from . We subtract from first because we have to go from left to right (PEMDAS). Lastly, we subtract from to get the answer.
### Example Question #9 : How To Subtract
Sally bought 3 different picture frames and paid a total of $60. If the first frame cost $15 and the second frame cost $25, how much did the third frame cost?
Explanation:
In order to find out how much the third frame costs, we need to subtract the cost of the first two frames ($15 and $25) from the total amount ($60). There are two ways to subtract the cost of the first two frames from the total amount:
Method 1: We can subtract the total amount spent on the first two frames from $60. Remember to follow Order of Operations (PEMDAS) and add the numbers in parentheses first. The cost of the third frame was $20.
Method 2: We can subtract the cost of each frame from $60. The cost of the third frame was $20.
### Example Question #2 : How To Subtract
Which of the following numbers can complete the sequence below?
|
# Package de.lmu.ifi.dbs.elki.algorithm.itemsetmining.associationrules.interest
Association rule interestingness measures.
See: Description
• Interface Summary
Interface Description
InterestingnessMeasure
Interface for interestingness measures.
• Class Summary
Class Description
AddedValue
Added value (AV) interestingness measure: $$\text{confidence}(X \rightarrow Y) - \text{support}(Y) = P(Y|X)-P(Y)$$.
CertaintyFactor
Certainty factor (CF; Loevinger) interestingness measure. $$\tfrac{\text{confidence}(X \rightarrow Y) - \text{support}(Y)}{\text{support}(\neg Y)}$$.
Confidence
Confidence interestingness measure, $$\tfrac{\text{support}(X \cup Y)}{\text{support}(X)} = \tfrac{P(X \cap Y)}{P(X)}=P(Y|X)$$.
Conviction
Conviction interestingness measure: $$\frac{P(X) P(\neg Y)}{P(X\cap\neg Y)}$$.
Cosine
Cosine interestingness measure, $$\tfrac{\text{support}(A\cup B)}{\sqrt{\text{support}(A)\text{support}(B)}} =\tfrac{P(A\cap B)}{\sqrt{P(A)P(B)}}$$.
GiniIndex
Gini-index based interestingness measure, using the weighted squared conditional probabilities compared to the non-conditional priors.
Jaccard
Jaccard interestingness measure: $\tfrac{\text{support}(A \cup B)}{\text{support}(A \cap B)} =\tfrac{P(A \cap B)}{P(A)+P(B)-P(A \cap B)} =\tfrac{P(A \cap B)}{P(A \cup B)}$ Reference: P.
JMeasure
J-Measure interestingness measure.
Klosgen
Klösgen interestingness measure.
Leverage
Leverage interestingness measure.
Lift
Lift interestingness measure.
## Package de.lmu.ifi.dbs.elki.algorithm.itemsetmining.associationrules.interest Description
Association rule interestingness measures.
Much of the confusion with these measures arises from the anti-monotonicity of itemsets, which are omnipresent in the literature.
In the itemset notation, the itemset $$X$$ denotes the set of matching transactions $$\{T|X\subseteq T\}$$ that contain the itemset $$X$$. If we enlarge $$Z=X\cup Y$$, the resulting set shrinks: $$\{T|Z\subseteq T\}=\{T|X\subseteq T\}\cap\{T|Y\subseteq T\}$$.
Because of this: $$\text{support}(X\cup Y) = P(X \cap Y)$$ and $$\text{support}(X\cap Y) = P(X \cup Y)$$. With "support" and "confidence", it is common to see the reversed semantics (the union on the constraints is the intersection on the matches, and conversely); with probabilities it is common to use "events" as in frequentist inference.
To make things worse, the "support" is sometimes in absolute (integer) counts, and sometimes used in a relative share.
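As a concrete illustration of these conventions (this is not ELKI code; the struct and function names are invented for the example, and lift and leverage use their standard textbook definitions), the measures listed above can be computed from relative supports in the probability notation as follows:

#include <cmath>

// Relative supports in the probability convention used above:
// pX  = support(X)     = P(X)
// pY  = support(Y)     = P(Y)
// pXY = support(X u Y) = P(X n Y), the share of transactions containing both.
// Denominators are assumed to be nonzero.
struct RuleStats { double pX, pY, pXY; };

double confidence(const RuleStats& s)     { return s.pXY / s.pX; }            // P(Y|X)
double added_value(const RuleStats& s)    { return confidence(s) - s.pY; }    // P(Y|X) - P(Y)
double lift(const RuleStats& s)           { return s.pXY / (s.pX * s.pY); }
double leverage(const RuleStats& s)       { return s.pXY - s.pX * s.pY; }
double conviction(const RuleStats& s)     { return s.pX * (1.0 - s.pY) / (s.pX - s.pXY); }
double jaccard(const RuleStats& s)        { return s.pXY / (s.pX + s.pY - s.pXY); }
double cosine_measure(const RuleStats& s) { return s.pXY / std::sqrt(s.pX * s.pY); }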
|
The cpmdist.zip file is a binary distribution intended to demonstrate Cowgol on Z80 CP/M systems. It needs a platform with a Z80 and at least 50kB of TPA. (Cowgol can be compiled for smaller. Email me.)
## Installation
There’s nothing standard about CP/M, so I’m not even going to try to make turnkey instructions.
The zipfile contains two directories, a and b. These are intended to go on floppies A: and B: respectively, with B: containing the Cowgol binaries and source and A: being a work disk. Copy the contents onto the disks of your choice using your favourite CP/M disk copying tool: sorry, I can’t advise on this.
Once done, boot your system. Now do SUBMIT COMPILE and, hopefully, the compiler will run, compiling the TESTPROG.COW source file, and leave you a runnable binary called COW.COM.
## Known issues
There are many, of which this is a non-exhaustive list.
• it’s dead slow. There are several reasons, but the biggest of which is that Cowgol is a whole-program compiler and every time you invoke it it has to recompile the standard libraries. It’s possible to partially precompile these, which will help, but this isn’t implemented yet for CP/M.
• standard libraries — there aren’t really many for CP/M. You get console printing and file access, and that’s really it. Anything else you’ll have to bind yourself. Sorry.
• the compiler script — CP/M 2.2 can’t detect errors (Cowgol does try to set the exit status properly, though). So it’ll just barrel on if something goes wrong. Any suggestions? Email me.
• code generation bugs. Haven’t run into one for a while, doesn’t mean they’re not still out there somewhere…
|
In mathematics, especially in the area of abstract algebra dealing with ordered structures on abelian groups, the Hahn embedding theorem gives a simple description of all linearly ordered abelian groups. It is named after Hans Hahn.
Overview
The theorem states that every linearly ordered abelian group G can be embedded as an ordered subgroup of the additive group ℝΩ endowed with a lexicographical order, where ℝ is the additive group of real numbers (with its standard order), Ω is the set of Archimedean equivalence classes of G, and ℝΩ is the set of all functions from Ω to ℝ which vanish outside a well-ordered set.
Let 0 denote the identity element of G. For any nonzero element g of G, exactly one of the elements g or −g is greater than 0; denote this element by |g|. Two nonzero elements g and h of G are Archimedean equivalent if there exist natural numbers N and M such that N|g| > |h| and M|h| > |g|. Intuitively, this means that neither g nor h is "infinitesimal" with respect to the other. The group G is Archimedean if all nonzero elements are Archimedean-equivalent. In this case, Ω is a singleton, so ℝΩ is just the group of real numbers. Then Hahn's Embedding Theorem reduces to Hölder's theorem (which states that a linearly ordered abelian group is Archimedean if and only if it is a subgroup of the ordered additive group of the real numbers).
Gravett (1956) gives a clear statement and proof of the theorem. The papers of Clifford (1954) and Hausner & Wendel (1952) together provide another proof. See also Fuchs & Salce (2001, p. 62).
See also: Archimedean group
References
Fuchs, László; Salce, Luigi (2001), Modules over non-Noetherian domains, Mathematical Surveys and Monographs, 84, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1963-0, MR 1794715
Ehrlich, Philip (1995), "Hahn's "Über die nichtarchimedischen Grössensysteme" and the Origins of the Modern Theory of Magnitudes and Numbers to Measure Them", in Hintikka, Jaakko (ed.), From Dedekind to Gödel: Essays on the Development of the Foundations of Mathematics (PDF), Kluwer Academic Publishers, pp. 165–213
Hahn, H. (1907), "Über die nichtarchimedischen Größensysteme.", Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften, Wien, Mathematisch - Naturwissenschaftliche Klasse (Wien. Ber.) (in German), 116: 601–655
Gravett, K. A. H. (1956), "Ordered Abelian Groups", The Quarterly Journal of Mathematics. Oxford. Second Series, 7: 57–63, doi:10.1093/qmath/7.1.57
Clifford, A.H. (1954), "Note on Hahn's Theorem on Ordered Abelian Groups", Proceedings of the American Mathematical Society, 5 (6): 860–863, doi:10.2307/2032549
Hausner, M.; Wendel, J.G. (1952), "Ordered vector spaces", Proceedings of the American Mathematical Society, 3: 977–982, doi:10.1090/S0002-9939-1952-0052045-1
|
# Let two numbers have arithmetic mean 9 and geometric mean 4. Then these numbers are the roots of the quadractic equation.
$\begin{array}{1 1}(A)\;x^2+18x+16=0 \\(B)\;x^2-18x-16=0 \\(C)\;x^2+18x-16=0 \\ (D)\;x^2-18x+16=0 \end{array}$
$x^2-18x+16=0$, since the arithmetic mean gives $\frac{a+b}{2}=9$, i.e. $a+b=18$, the geometric mean gives $\sqrt{ab}=4$, i.e. $ab=16$, and a quadratic with roots $a$ and $b$ is $x^2-(a+b)x+ab=0$.
Hence D is the correct answer.
|
# Algebraic Topology and the Arnold Conjecture
### 4:00 PM, October 30, 1997
Just before his death, Poincaré stated a 'theorem' which had arisen in his studies on the celestial mechanics of the solar system: an area preserving self-diffeomorphism of the annulus which rotates the boundary circles in opposite directions leaves at least two points fixed. After many years, this result was proven by G. D. Birkhoff using very special arguments in two dimensions. In the 1960's, V. Arnold saw how to generalize the Poincaré-Birkhoff 'Twist' Theorem in the light of modern symplectic geometry. He conjectured that the number of fixed points of a Hamiltonian symplectomorphism (i.e. the analogue of the annular twist) on a symplectic manifold is at least as great as the minimal number of critical points of a smooth function on the manifold. In recent times, as a consequence of Lusternik-Schnirelmann theory and the difficulty of the original conjecture, the Arnold conjecture was reformulated, replacing the number of critical points by the cuplength of the manifold. In this form, Floer proved the conjecture for manifolds M with $\pi_2(M) = 0$ by creating a magnificent new Morse-type analytic homology theory, Floer homology. This talk will focus on how more classical algebraic topological invariants may be used to prove the original Arnold conjecture.
|
# All Questions
5k views
### How can I evaluate $\sum_{n=0}^\infty (n+1)x^n$
How can I evaluate $$\sum_{n=1}^\infty \frac{2n}{3^{n+1}}$$ I know the answer thanks to Wolfram Alpha, but I'm more concerned with how I can derive that answer. It cites tests to prove that it is ...
16k views
### Is .999999999… = 1?
I'm told by smart people that $0.999999999... = 1$ and I believe them, but is there a proof that explains why this is?
14k views
### Why does $1+2+3+\dots = -\frac{1}{12}$?
$\displaystyle\sum_{n=1}^\infty \frac{1}{n^s}$ only converges to $\zeta(s)$ if $\text{Re}(s) > 1$. Why should analytically continuing to $\zeta(-1)$ give the right answer?
5k views
### How to use the Extended Euclidean Algorithm manually?
I've only found a recursive algorithm of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any idea?
30k views
### Is $\frac{dy}{dx}$ not a ratio?
In the book Thomas's Calculus (11th edition) it is mentioned (Section 3.8 pg 225) that the derivative $\frac{dy}{dx}$ is not a ratio. Couldn't it be interpreted as a ratio, because according to the ...
19k views
### Different methods to compute $\sum\limits_{n=1}^\infty \frac{1}{n^2}$
As I have heard people did not trust Euler when he first discovered the formula $$\zeta(2)=\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}.$$ However, Euler was Euler and he gave other proofs. I ...
6k views
### How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$?
How can one prove the statement $$\lim\limits_{x\to 0}\frac{\sin x}x=1$$ without using the Taylor series of $\sin$, $\cos$ and $\tan$? Best would be a geometrical solution. This is homework. In my ...
5k views
### Prove that $\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$
For all $a, m, n \in \mathbb{Z}^+$, $$\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$$
7k views
### Value of $\sum\limits_n x^n$
Why is $\displaystyle \sum\limits_{n=0}^{\infty} 0.7^n$ equal $1/(1-0.7) = 10/3$ ? Can we generalize the above to $\displaystyle \sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$ ? Are there some ...
7k views
### Zero to the zero power - Is $0^0=1$?
Could someone provide me with good explanation of why $0^0 = 1$? My train of thought: $x > 0$ $0^x = 0^{x-0} = 0^x/0^0$, so $0^0 = 0^x/0^x = ?$ Possible answers: $0^0 * 0^x = 1 * 0^x$, so ...
7k views
### How can you prove that a function has no closed form integral?
I've come across statements in the past along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations: addition/subtraction ...
8k views
### Proving $\int_{0}^{\infty} \mathrm{e}^{-x^2} dx = \dfrac{\sqrt \pi}{2}$
How to prove $$\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$$
15k views
### What is 48÷2(9+3)? [duplicate]
There is a huge debate on the internet on $48÷2(9+3)$. I figured if i wanted to know the answer this is the best place to ask. I believe it is 2 as i believe it is part of the bracket operation in ...
6k views
### $\sqrt a$ is either an integer or an irrational number.
I got this interesting question in my mind: How do we prove that if $a \in \mathbb N$, then $\sqrt a$ is an integer or an irrational number? Can we extend this result? That is, can it be shown ...
5k views
### How to show that $f(x)=x^2$ is continuous at $x=1$?
How to show that $f(x)=x^2$ is continuous at $x=1$?
20k views
### Is value of $\pi = 4$?
What is wrong with this? SOURCE
2k views
### If $a \mid m$ and $(a + 1) \mid m$, prove $a(a + 1) | m$.
Can anyone help me out here? Can't seem to find the right rules of divisibility to show this: If $a \mid m$ and $(a + 1) \mid m$, then $a(a + 1) \mid m$.
2k views
### $-1$ is not $1$, so where is the mistake?
I know there must be something unmathematical in the following but I don't know where it is: \begin{align} \sqrt{-1} &= i \\ \\ \frac1{\sqrt{-1}} &= \frac1i \\ \\ \frac{\sqrt1}{\sqrt{-1}} ...
3k views
### Division by $0$
I thought it was elementary to me, but I started to do some exercises and came up some definitions I have sort of difficulty to distinguish. In parentheses are my questions. $\dfrac {x}{0}$ is ...
2k views
### $i^2$ why is it $-1$ when you can show it is $1$? [duplicate]
We know $$i^2=-1$$then why does this happen? $$i^2 = \sqrt{-1}\times\sqrt{-1}$$ $$=\sqrt{-1\times-1}$$ $$=\sqrt{1}$$ $$= 1$$ EDIT: I see this has been dealt with before but at least with ...
4k views
### The square roots of different primes are linearly independent over the field of rationals
I need to find a way of proving that the square roots of a finite set of different primes are linearly independent over the field of rationals. I've tried to solve the problem using ...
13k views
### Solving the integral $\int_{0}^{\infty} \frac{\sin{x}}{x} \ dx = \frac{\pi}{2}$?
A famous exercise which one encounters while doing Complex Analysis (Residue theory) is to prove that the given integral: $$\int_{0}^{\infty} \frac{\sin x}{x} \ dx = \frac{\pi}{2}$$ Well, can anyone ...
3k views
### Evaluating $\lim_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$
I'm supposed to calculate: $$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$ By using W|A, i may guess that the limit is $\frac{1}{2}$ that is a pretty interesting and nice result. I ...
2k views
### Closed form of integral.
I've been looking at $$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$ It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example: ...
2k views
### How to solve $x^3=-1$?
How to solve $x^3=-1$? I got following: $x^3=-1$ $x=(-1)^{\frac{1}{3}}$ $x=\frac{(-1)^{\frac{1}{2}}}{(-1)^{\frac{1}{6}}}=\frac{i}{(-1)^{\frac{1}{6}}}$...
4k views
### Proving the identity $\sum\limits_{k=1}^n {k^3} = {\Large(}\sum\limits_{k=1}^n k{\Large)}^2$ without induction
I recently proved that $$\sum_{k=1}^n k^3 = \left(\sum_{k=1}^n k \right)^2$$ Using mathematical induction. I'm interested if there's an intuitive explanation, or even a combinatorial ...
6k views
### Proof that $\sum\limits_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$?
I am just starting into calculus and I have a question about the following statement I encountered while learning about definite integrals: $$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$$ I really ...
15k views
### Examples of apparent patterns that eventually fail
Often, when I try to describe mathematics to the layman, I find myself struggling to convince them of the importance and consequence of 'proof'. I receive responses like: "surely if the Collatz ...
45k views
### Multiple-choice question about the probability of a random answer to itself being correct
I found this math "problem" on the internet, and I'm wondering if it has an answer: Question: If you choose an answer to this question at random, what is the probability that you will be correct? ...
7k views
### Striking applications of integration by parts
What are your favorite applications of integration by parts? (The answers can be as lowbrow or highbrow as you wish. I'd just like to get a bunch of these in one place!) Thanks for your ...
8k views
### Norms Induced by Inner Products and the Parallelogram Law
Let $V$ be a normed vector space (over $\mathbb{R}$, say, for simplicity) with norm $\lVert\cdot\rVert$. It's not hard to show that if $\lVert \cdot \rVert = \sqrt{\langle \cdot, \cdot \rangle}$ ...
12k views
### Proof for formula for sum of sequence $1+2+3+\ldots+n$?
Apparently $1+2+3+4+\ldots+n = \dfrac{n\times(n+1)}2$. How? What's the proof? Or maybe it is self apparent just looking at the above? Does this problem have a name and maybe a presence on the net? ...
|
# Droplet sample preparation
## Introduction
Mass spectrometry methods have enabled quantifying thousands of proteins at the single cell level. These methods open the door to many biological opportunities, such as characterizing heterogeneity in the tumor micro-environment, signaling pathways driving stem cell differentiation, and intrinsically single-cell processes, such as the cell division cycle. To further advance single-cell MS analysis, we developed an automated nano-ProteOmic sample Preparation (nPOP). nPOP uses piezo acoustic dispensing to isolate individual cells in 300 picoliter volumes and performs all subsequent preparation steps in small droplets on a hydrophobic slide. This allows massively parallel sample preparation, including lysing, digesting, and labeling individual cells in volumes below 20 nanoliters.
## Application
Single-cell protein analysis using nPOP classified cells by cell type and by cell cycle phase. Furthermore, the data allowed us to quantify the covariation between cell cycle protein markers and thousands of proteins. Based on this covariation, we identify cell cycle associated proteins and functions that are shared across cell types and those that differ between cell types.
## Raw Data from experiments benchmarking nPOP
• MassIVE Repository:
## Processed Data from experiments benchmarking nPOP
• Peptides-raw.csv
• Peptides x single cells at 1% FDR. The first columns list the peptide sequences and each subsequent column corresponds to a single cell. Peptide identification is based on spectra analyzed by MaxQuant and is enhanced by using DART-ID to incorporate retention time information. See Specht et al., 2019 for details.
• Proteins-processed.csv
• Proteins x single cells at 1% FDR, imputed and batch corrected.
• Cells.csv
• Annotation x single cells. Each column corresponds to a single cell and the rows include relevant metadata, such as, cell type, measurements from the isolation of the cell, and derivative quantities, i.e., rRI, CVs, reliability. This file corresponds to Proteins-processed.csv and Peptides-raw.csv files.
• HeLa-proteins.csv
• Proteins x single cells for HeLa cells at 1% FDR, unimputed and z-scored.
• U-937-proteins.csv
• Proteins x single cells for U-937 cells at 1% FDR, unimputed and z-scored.
|
International
Tables for
Crystallography
Volume D
Physical properties of crystals
Edited by A. Authier
International Tables for Crystallography (2013). Vol. D, ch. 1.7, pp. 182-183
## Section 1.7.2.1.2. Linear and nonlinear susceptibilities
B. Boulangera* and J. Zyssb
aInstitut Néel CNRS Université Joseph Fourier, 25 rue des Martyrs, BP 166, 38042 Grenoble Cedex 9, France, and bLaboratoire de Photonique Quantique et Moléculaire, Ecole Normale Supérieure de Cachan, 61 Avenue du Président Wilson, 94235 Cachan, France
Correspondence e-mail: [email protected]
#### 1.7.2.1.2. Linear and nonlinear susceptibilities
Whereas the polarization response has been expressed so far in the time domain, in which causality and time invariance are most naturally expressed, Fourier transformation into the frequency domain permits further simplification of the equations given above and the introduction of the susceptibility tensors according to the following derivation.
The direct and inverse Fourier transforms of the field are defined in the usual way; since E(t) is real, its Fourier components at opposite frequencies are complex conjugates of one another.
#### 1.7.2.1.2.1. Linear susceptibility
By substitution of (1.7.2.15) in (1.7.2.7), the linear polarization takes its frequency-domain form in terms of the linear susceptibility tensor χ(1).
In these equations, the response frequency equals the excitation frequency so as to satisfy the energy conservation condition that will be generalized in the following. In order to ensure convergence of χ(1), ω has to be taken in the upper half of the complex plane. The reality of R(1) implies that χ(1) at −ω is the complex conjugate of χ(1) at ω.
#### 1.7.2.1.2.2. Second-order susceptibility
Substitution of (1.7.2.15) in (1.7.2.12) yields the frequency-domain expression of the second-order polarization in terms of the second-order susceptibility tensor χ(2)(−ωσ; ω1, ω2), with ωσ = ω1 + ω2. Frequencies ω1 and ω2 must be in the upper half of the complex plane to ensure convergence. Reality of R(2) implies that χ(2) at the negated frequencies is the complex conjugate of χ(2) at the original frequencies. The tensor χ(2)(−ωσ; ω1, ω2) is invariant under the interchange of the (α, ω1) and (β, ω2) pairs.
#### 1.7.2.1.2.3. nth-order susceptibility
Substitution of (1.7.2.15) in (1.7.2.14) provides the frequency-domain expression of the nth-order polarization in terms of the nth-order susceptibility tensor χ(n)(−ωσ; ω1, …, ωn), where ωσ = ω1 + … + ωn.
All frequencies must lie in the upper half of the complex plane, and reality of χ(n) imposes a conjugation relation between the susceptibilities at opposite frequencies. Intrinsic permutation symmetry implies that χ(n) is invariant with respect to the n! permutations of the (αi, ωi) pairs.
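For reference, in the conventional notation of nonlinear optics these two conditions take the following standard form (the convention here may differ slightly from the chapter's own displayed equations):

$$\left[\chi^{(n)}_{\mu\alpha_1\cdots\alpha_n}(-\omega_\sigma;\omega_1,\ldots,\omega_n)\right]^{*}=\chi^{(n)}_{\mu\alpha_1\cdots\alpha_n}(\omega_\sigma;-\omega_1,\ldots,-\omega_n),\qquad \omega_\sigma=\sum_{i=1}^{n}\omega_i,$$

and intrinsic permutation symmetry states that $\chi^{(n)}_{\mu\alpha_1\cdots\alpha_n}(-\omega_\sigma;\omega_1,\ldots,\omega_n)$ is unchanged under any of the $n!$ simultaneous permutations of the pairs $(\alpha_i,\omega_i)$.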
|
# Why training and validation similar loss curves lead to poor performance
I am training a binary classification neural network model in MATLAB. The plots I obtained with 20 neurons in the hidden layer are given below: the confusion matrix and the cross-entropy-versus-epochs graph.
To prevent overfitting, the training curve in a loss graph should be similar to the validation curve.
But in the current situation, the third graph shows a case where the validation curve is similar to the training curve, yet the overall accuracy is lower than for the run where the two curves diverge in the plot above.
Why is this happening, and what am I doing wrong in interpreting these curves?
|
What is 133/25 as a decimal?
Solution and how to convert 133 / 25 into a decimal
133 / 25 = 5.32
133/25 or 5.32 can be represented in multiple ways (even as a percentage). The key is knowing when we should use each representation and how to easily transition between a fraction, decimal, or percentage. Decimals and fractions both represent parts of numbers, giving us the ability to represent values smaller than a whole. Choosing which to use starts with the real-life scenario. Fractions are a clearer representation of objects (half of a cake, 1/3 of our time) while decimals represent comparisons better (a .333 batting average, a $1.50 price). If we need to convert a fraction quickly, let's find out how and when we should.
133/25 is 133 divided by 25
The first step of teaching our students how to convert to and from decimals and fractions is understanding what the fraction is telling us. 133 is being divided by 25. Think of this as our directions, and now we just need to be able to assemble the project! Fractions have two parts: the numerator on top and the denominator on the bottom, with a division symbol between them, i.e., 133 divided by 25. We must divide 133 by 25 to find out how many whole parts it will have, plus represent the remainder in decimal form. This is our equation:
Numerator: 133
• Numerators are the portion of total parts, shown at the top of the fraction. Overall, 133 is a big number, which means you'll have a significant number of parts in your equation. The bad news is that it's an odd number, which makes it harder to convert in your head. Large conversions like this are tough, especially without a calculator. Let's take a look at the denominator of our fraction.
Denominator: 25
• Denominators represent the total parts, located at the bottom of the fraction. 25 is a fairly large number, which means you may want to use a calculator. An odd denominator is difficult to simplify unless it's divisible by 3, 5 or 7 (25 is divisible by 5, which helps). Ultimately, don't be afraid of double-digit denominators. Next, let's go over how to convert 133/25 to 5.32.
Converting 133/25 to 5.32
Step 1: Set your long division bracket: denominator / numerator
$$\require{enclose} 25 \enclose{longdiv}{ 133 }$$
Use long division to solve step one. This is the same method we all learned in school when dividing any number against itself and we will use the same process for number conversion as well.
Step 2: Solve for how many whole groups of 25 you can pull from 133
$$\require{enclose} 5.\phantom{00} \\ 25 \enclose{longdiv}{ 133.00 }$$
How many whole groups of 25 can you pull from 133? 5, since 5 × 25 = 125. Write the 5 above the bracket: that is the whole-number part of your answer!
Step 3: Subtract the remainder
$$\require{enclose} 5.\phantom{00} \\ 25 \enclose{longdiv}{ 133.00 } \\ \underline{ 125 \phantom{.00} } \\ 8\phantom{.00}$$
Subtracting 125 from 133 leaves a remainder of 8. If there is no remainder, you're done! If you have a remainder of 25 or more, go back; your solution will need a bit of adjustment. If you have a number less than 25, continue!
Step 4: Repeat step 3 until you have no remainder
Bring down a zero to make 80: 25 goes into 80 three times (75), leaving 5. Bring down another zero to make 50: 25 goes into 50 exactly twice, leaving 0. The decimal digits are therefore 3 and 2, so 133/25 = 5.32. In some cases, you'll never reach a remainder of zero. Looking at you, pi! And that's okay. Find a place to stop and round to the nearest value.
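The same digit-by-digit process can be automated. The following short C++ sketch is not part of the original walkthrough; it simply performs the long division described above and prints 5.32 for 133/25:

#include <iostream>
#include <string>

// Convert num/den to a decimal string by long division, with at most
// max_digits digits after the decimal point.
std::string to_decimal(long long num, long long den, int max_digits = 10) {
    std::string out = std::to_string(num / den);  // whole-number part
    long long rem = num % den;
    if (rem == 0) return out;
    out += '.';
    for (int i = 0; i < max_digits && rem != 0; ++i) {
        rem *= 10;                                // bring down a zero
        out += static_cast<char>('0' + rem / den);
        rem %= den;
    }
    return out;
}

int main() {
    std::cout << to_decimal(133, 25) << "\n";     // prints 5.32
}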
Why should you convert between fractions, decimals, and percentages?
Converting fractions into decimals is something we do in everyday life, though we don't always notice. Remember, fractions and decimals are both ways of representing parts of a number more precisely than whole numbers allow. The same goes for percentages. It's common for students to dislike learning about decimals and fractions because it is tedious, but they all represent how numbers show us value in the real world. Here are examples of when we should use each.
When you should convert 133/25 into a decimal
Sports Stats - Fractions can be used here, but when comparing percentages, the clearest representation of success is from decimal points. Ex: A player's batting average: .333
When to convert 5.32 to 133/25 as a fraction
Cooking: When scrolling through Pinterest to find the perfect chocolate chip cookie recipe, the chef will not tell you to use .84 cups of chocolate chips. That brings confusion to standard cooking measurements. It's much clearer to say 42/50 cups of chocolate chips. And to take it even further, no one would actually use 42/50 cups. You'd see a more common fraction like ¾ or ½, usually split into quarters or halves.
Practice Decimal Conversion with your Classroom
• If 133/25 = 5.32 what would it be as a percentage?
• What is 1 + 133/25 in decimal form?
• What is 1 - 133/25 in decimal form?
• If we switched the numerator and denominator, what would be our new fraction?
• What is 5.32 + 1/2?
|
# Is this even or odd?
Note: There is not been a vanilla parity test challenge yet (There is a C/C++ one but that disallows the ability to use languages other than C/C++, and other non-vanilla ones are mostly closed too), So I am posting one.
Given a positive integer, output its parity (i.e. if the number is odd or even) in truthy/falsy values. You may choose whether truthy results correspond to odd or even inputs.
# Examples
Assuming True/False as even and odd (This is not required, You may use other Truthy/Falsy values for each), responsively:
(Input):(Output)
1:False
2:True
16384:True
99999999:False
# Leaderboard
var QUESTION_ID=113448,OVERRIDE_USER=64499;function answersUrl(e){return"https://api.stackexchange.com/2.2/questions/"+QUESTION_ID+"/answers?page="+e+"&pagesize=100&order=desc&sort=creation&site=codegolf&filter="+ANSWER_FILTER}function commentUrl(e,s){return"https://api.stackexchange.com/2.2/answers/"+s.join(";")+"/comments?page="+e+"&pagesize=100&order=desc&sort=creation&site=codegolf&filter="+COMMENT_FILTER}function getAnswers(){jQuery.ajax({url:answersUrl(answer_page++),method:"get",dataType:"jsonp",crossDomain:!0,success:function(e){answers.push.apply(answers,e.items),answers_hash=[],answer_ids=[],e.items.forEach(function(e){e.comments=[];var s=+e.share_link.match(/\d+/);answer_ids.push(s),answers_hash[s]=e}),e.has_more||(more_answers=!1),comment_page=1,getComments()}})}function getComments(){jQuery.ajax({url:commentUrl(comment_page++,answer_ids),method:"get",dataType:"jsonp",crossDomain:!0,success:function(e){e.items.forEach(function(e){e.owner.user_id===OVERRIDE_USER&&answers_hash[e.post_id].comments.push(e)}),e.has_more?getComments():more_answers?getAnswers():process()}})}function getAuthorName(e){return e.owner.display_name}function process(){var e=[];answers.forEach(function(s){var r=s.body;s.comments.forEach(function(e){OVERRIDE_REG.test(e.body)&&(r="<h1>"+e.body.replace(OVERRIDE_REG,"")+"</h1>")});var a=r.match(SCORE_REG);a&&e.push({user:getAuthorName(s),size:+a[2],language:a[1],link:s.share_link})}),e.sort(function(e,s){var r=e.size,a=s.size;return r-a});var s={},r=1,a=null,n=1;e.forEach(function(e){e.size!=a&&(n=r),a=e.size,++r;var t=jQuery("#answer-template").html();t=t.replace("{{PLACE}}",n+".").replace("{{NAME}}",e.user).replace("{{LANGUAGE}}",e.language).replace("{{SIZE}}",e.size).replace("{{LINK}}",e.link),t=jQuery(t),jQuery("#answers").append(t);var o=e.language;/<a/.test(o)&&(o=jQuery(o).text()),s[o]=s[o]||{lang:e.language,user:e.user,size:e.size,link:e.link}});var t=[];for(var o in s)s.hasOwnProperty(o)&&t.push(s[o]);t.sort(function(e,s){var F=function(a){return a.lang.replace(/<\/?a.*?>/g,"").toLowerCase()},el=F(e),sl=F(s);return el>sl?1:el<sl?-1:0});for(var c=0;c<t.length;++c){var i=jQuery("#language-template").html(),o=t[c];i=i.replace("{{LANGUAGE}}",o.lang).replace("{{NAME}}",o.user).replace("{{SIZE}}",o.size).replace("{{LINK}}",o.link),i=jQuery(i),jQuery("#languages").append(i)}}var ANSWER_FILTER="!t)IWYnsLAZle2tQ3KqrVveCRJfxcRLe",COMMENT_FILTER="!)Q2B_A2kjfAiU78X(md6BoYk",answers=[],answers_hash,answer_ids,answer_page=1,more_answers=!0,comment_page;getAnswers();var SCORE_REG=/<h\d>\s*([^\n,]*[^\s,]),.*?(\d+)(?=[^\n\d<>]*(?:<(?:s>[^\n<>]*<\/s>|[^\n<>]+>)[^\n\d<>]*)*<\/h\d>)/,OVERRIDE_REG=/^Override\s*header:\s*/i;
body{text-align:left!important}#answer-list,#language-list{padding:10px;width:290px;float:left}table thead{font-weight:700}table td{padding:5px}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <link rel="stylesheet" type="text/css" href="//cdn.sstatic.net/codegolf/all.css?v=83c949450c8b"> <div id="answer-list"> <h2>Leaderboard</h2> <table class="answer-list"> <thead> <tr><td></td><td>Author</td><td>Language</td><td>Size</td></tr></thead> <tbody id="answers"> </tbody> </table> </div><div id="language-list"> <h2>Winners by Language</h2> <table class="language-list"> <thead> <tr><td>Language</td><td>User</td><td>Score</td></tr></thead> <tbody id="languages"> </tbody> </table> </div><table style="display: none"> <tbody id="answer-template"> <tr><td>{{PLACE}}</td><td>{{NAME}}</td><td>{{LANGUAGE}}</td><td>{{SIZE}}</td><td><a href="{{LINK}}">Link</a></td></tr></tbody> </table> <table style="display: none"> <tbody id="language-template"> <tr><td>{{LANGUAGE}}</td><td>{{NAME}}</td><td>{{SIZE}}</td><td><a href="{{LINK}}">Link</a></td></tr></tbody> </table>
• This isn't the first time I've confused mathematical with computational parity... this is a code site after all! – Neil Mar 21 '17 at 10:38
• Since this is pretty much one of these(1,2,3) questions, it should probably have a snippet to see all the answers. – fəˈnɛtɪk Mar 21 '17 at 13:57
• @MikeBufardeci Because "catalogue" is spelled differently based on which country you're from. For those of us in the U.S., it's "catalog". "Leaderboard" is culture-invariant. – mbomb007 Mar 21 '17 at 16:37
• @tuskiomi The challenge only asks about positive integers. (0 is considered even but not positive) – Calvin's Hobbies Mar 22 '17 at 2:46
• @LucioCrusca Welcome to PPCG! The basic idea of Code Golf is to make a program in the shortest form you can. This challenge is to read an integer (positive,non-zero), and output if it is even or odd. If you are confused with something, please visit The Nineteenth Byte and ask freely. Or if you are confused with the site's policy or rules, go to the Meta. Finally, Thanks for subscribing to our community! – Matthew Roh Mar 23 '17 at 15:09
# Carrot, 5 bytes
#^F%2
Try it online!
Note: this is a trivial answer. Please do not vote. I am only posting this for completion.
### Explanation
#^ // set the stack to the input
F // convert it to a float
%2 // take the modulo 2 of it
// implicit output
• That's okay. Every answer else is trivial too. At least it's not a builtin. – Matthew Roh Mar 23 '17 at 9:32
• Why do you have to convert to float? – Hello Goodbye Nov 14 '19 at 16:27
• @HelloGoodbye Because it is by default a string, although yes, I should work just as well – user41805 Nov 14 '19 at 16:57
• ah, makes sense – Hello Goodbye Nov 14 '19 at 17:05
# APL (Dyalog), 3 bytes
Full program body:
2|⎕
Try it online!
Alternatively:
2⊤⎕
Try it online!
| is modulus, ⊤ is base-convert, ⎕ is numeric input prompt. Handles any numeric array.
I've found 40 functions which can also do the job in 3 bytes. List and try them all online!
# Haxe, 22 bytes
function(x)return x%2;
Test it online!
Haxe is a high-level, strictly typed language designed to be compiled across many different platforms. While it doesn't have any form of lambda expressions, it does have one unique property that allows for interesting golfing: everything is an expression. This allows you to do interesting things that aren't possible in most similar languages, such as b>0&&return b. In this particular case, it allows us to remove the brackets that would normally be required in a function definition.
• High level language, low level answer (lol) – Matthew Roh Mar 23 '17 at 13:37
# Qwerty-RPN, 4 bytes
@2%#
Explanation:
@ Input number
% ...modulo...
2 2
# print
# Beeswax, 6 bytes
_2~,%{
Explanation:
_ Create a bee flying horizontally [0,0,0]
2 Set top to 2 [0,0,2]
~ Swap top and 2nd values [0,2,0]
, Take value from STDIN as int [0,2,7]
% Modulo: top % 2nd [0,0,1]
{ Print top [0,0,1]
Try it online!
## Ook!, 79 64 bytes
Update: 15 bytes shorter by removing the whitespaces, thanks to Okx.
This is a joke esoteric language, meant to be trivially isomorphic to brainfuck by substituting each command with an Orangutan phrase. This is my first time using it.
Ook.Ook.Ook!Ook?Ook.Ook!Ook.Ook?Ook.Ook!Ook?Ook!Ook?Ook.Ook!Ook.
Try it here! Give input as unary. Output is 1 for odd numbers, and nothing for even ones.
Explanation:
The above script is a direct translation of the 8 bytes brainfuck answer by @Dennis:
+[,>,]<.
Ook! has only 3 distinct syntax elements: Ook., Ook? and Ook!. These are combined into groups of two, and the various pair combinations are mapped to the brainfuck commands.
Substitution table:
> Ook.Ook?
< Ook?Ook.
+ Ook.Ook.
- Ook!Ook!
. Ook!Ook.
, Ook.Ook!
[ Ook!Ook?
] Ook?Ook!
• You can golf it by removing the newlines. – Okx Mar 23 '17 at 14:27
• @Okx On a second look, the online interpreter from my answer works with no whitespace delimiter. Cool! That means I can remove the spaces too. Thanks for pointing that out. I've read the wiki and the Ook! creator's page and only the newlines were said to be ignored, nothing about the spaces. – seshoumara Mar 23 '17 at 15:13
• Is there really any point to golf it? It is equivalent to BF, just with the byte count multiplied by 8, so golfing it is essentially the same as golfing BF. There's a reason it's not on TIO. – null Aug 14 '20 at 4:45
• Looking now at it, no, you're right. But back then if I wouldn't have done it, I wouldn't have known that it works also if you remove the newlines and especially the spaces. I don't think I made another answer in Ook since then. – seshoumara Aug 14 '20 at 20:26
## Gibberish, 8 bytes
eli2gmeo
eli - read line, convert to integer
2gm - push 2, push modulo of previous value by 2
eo - output stack
Outout 0 for even and 1 for odd
# J-uby, 6 bytes
:even?
In J-uby, Ruby's symbols are callable. Fixnum#even? in Ruby (predictably) returns whether a number is even or not. It can be called like so:
f = :even?
f.(2) #=> true
f.(3) #=> false
J, 8 chars
<&1@:|~&2
It's just a composition of
<&1
which flips 0 for 1, and
|~&2
which is mod 2.
The @: just composes the two functions together
• You don't need <&1@. It's up to us which Boolean corresponds to which parity. Also, 2&| saves a byte. Try it online! – Dennis Mar 25 '17 at 12:40
• Welcome to the site! – caird coinheringaahing Mar 31 '17 at 6:40
# Elisp, 12 10 bytes
(%(read)2)
(read) evaluates to the input, and the (% ...) expression is then evaluated. Outputs 1 for odd, 0 for even.
Test cases:
(Input):(Output)
1:1
2:0
16384:0
99999999:1
Edit: Saves 2 bytes thanks to @Dylan, for asking if it was possible to leave out the spaces in an Elisp expression. Turns out the answer is yes!
## Python REPL, 3 bytes
_&1
Explanation: Taking the bitwise AND with 1 returns the low bit of the input: 1 when the number is odd, 0 when it is even
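
As a quick sanity check (a hypothetical snippet, not part of the original answer), the low bit is exactly the parity:

```python
for n in (4, 7, 16384, 99999999):
    print(n, n & 1)   # prints 0 for even inputs, 1 for odd inputs
```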
• Just so you know, the downvote was cast automatically by the Community user when you edited your answer. I consider this a bug. – Dennis Mar 26 '17 at 15:08
• Note to voters: _ is the value of the last expression in an interactive session, so the code can be used like this. Per consensus, this is currently a valid form of taking input. – Dennis Mar 26 '17 at 16:08
• Welcome to PPCG! – Pavel Mar 26 '17 at 19:40
• @ГригорийПерельман thank you – Aashutosh Rathi Mar 28 '17 at 7:33
• @Dennis thanks for edit 👍 – Aashutosh Rathi Mar 28 '17 at 7:34
# J, 2 bytes
2| NB. remainder of arg / 2
So 0 = true (even), 1 = false (odd)
2| 7 => 1
2| 10 => 0
• This doesn't look right. I get a syntax error. I think you need another byte: 2|] or 2&| – Adám May 8 '17 at 8:55
# Acc!!, 43 bytes
N
Count i while _/10 {
N-_%2*2
}
Write _+41
Because of quirks of the input mechanism in Acc!!, the input number (given on stdin) has to be terminated with a signal value--in this case, a tab character. The code outputs 2 if the number is even, 0 if it is odd.
### Explanation
# Read a character's ASCII value into the accumulator
N
# Loop while accumulator intdiv 10 is greater than 0 (i.e. acc >= 10)
Count i while _/10 {
# Read another ASCII value, subtract (current acc % 2) * 2, and store back to acc
N-_%2*2
}
# At the end of the loop, we know we just read a tab character (ASCII 9). This means the
# acc value is 9 if the previous digit had an even ASCII value, or 7 if it was odd. We
# add 41 to convert to the ASCII codes of 2 and 0, respectively, and write to stdout.
Write _+41
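
The same idea — only the parity of the final digit matters, and the tab acts as a sentinel — can be modelled in ordinary code. This is a rough Python sketch of the algorithm above, not an Acc!! interpreter; the sample input strings are hypothetical:

```python
def parity_of_digit_stream(chars: str) -> str:
    """Model of the Acc!! logic: track the parity of the last digit seen,
    stopping when the tab sentinel (ASCII 9) is read."""
    acc = ord(chars[0])
    i = 1
    while acc // 10:                     # the loop condition _/10 is false only for the tab (9)
        acc = ord(chars[i]) - (acc % 2) * 2
        i += 1
    # acc ends as 9 if the final digit was even, 7 if odd; +41 gives '2' or '0'
    return chr(acc + 41)

print(parity_of_digit_stream("1234\t"))  # '2' -> the number was even
print(parity_of_digit_stream("77\t"))    # '0' -> the number was odd
```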
# Scala, 9 bytes
(_:Int)%2
This outputs 1 for odd, 0 for even
# Clojure, 15 bytes
#(= 0(rem % 2))
Ignoring the obvious built-ins, this is probably as short as it gets.
Checks if the remainder of the argument divided by 2 equals 0. Nothing fancy. Returns true for even numbers, false for odd.
Beaten by brainfuck!
# Dogescript, 32 29 bytes
such f much n
return n%2
wow
Returns 1 if n is odd, 0 otherwise.
• You could still keep the Dogescript syntax and save 3 bytes: such f much n return n%2 wow. There's no need to convert a truthy value to boolean. – Grant Miller Mar 31 '17 at 20:44
# REXX 13 BYTES
say arg(1)//2
Try it here
REXX functions and instructions
# Actually, 2 bytes
1&
Try it online!
Output is 0 if even, 1 if odd. The code is very straightforward: bitwise-AND (&) with 1 (return low bit of input).
# braingasm, 3 bytes
Takes input from stdin, prints 0 for even and 1 for odd.
;o:
; reads an integer, o checks the "oddity" (opposite of parity), : prints an integer.
edit: changed p to o to retrofit to a breaking change in the language.
• Congratulations on being the 100th answer to this question :P – math junkie Mar 23 '17 at 14:19
# WC, 40 bytes
WIP mini language I created.
;>_0|;>(?#@8|!@3|//##@2|!@2|//#$-$-/)*$

Explanation:

;>_0|   New var using the first artifact (the input number)
;>(     Start a new function
?       Select the first variable (reset var index)
#@8|    Start if statement with 9th global ("-1") as condition
!@3|    Print the 4th global ("1")
//      Terminate program
#       End if statement
#@2|    Start if statement with 3rd global ("0") as condition
!@2|    Print the 3rd global ("0")
//      Terminate program
#       End if statement
$-$-    Decrement the current variable twice
/       Restart context (the current function)
)       End function
*$      Call the current variable (the function)
Try it online!
• Cool looking language. Welcome to the site! :) – James Jun 16 '17 at 20:06
# ,,,, 1 byte
œ
Yeah, yeah, a builtin... For anybody wondering about the choice of character, it's supposed to be odd or even or o or e.
However, I do have other solutions.
## 2 bytes
1&
### Explanation
1&
implicit input
1 push 1
& pop input and 1 and push input & 1 (bitwise AND)
implicit output
## 2 bytes
2%
### Explanation
2%
implicit input
2 push 2
% pop input and 2 and push input % 2 (modulo)
implicit output
• œ yeah a new solution, and a new builtin – Matthew Roh Jul 12 '17 at 3:28
# x86_64 Linux machine language, 5 bytes
0: 97 xchg %edi,%eax
1: 83 e0 01    and    $0x1,%eax
4: c3          retq

To try it, compile and run the following C program

#include<stdio.h>
const char f[]="\x97\x83\xe0\1\xc3";
int main(){
  for( int i = 0; i < 10; i++ ) {
    printf("%d %d\n", i, ((int(*)(int))f)(i) );
  }
}

Try it online!

# Aceto, 5 bytes

ri2%p

r   reads input
i   converts to integer
2%  takes the number modulo 2
p   prints the answer

Try it online!

# Underload, 5 bytes

():aS

Underload doesn't really have a concept of Truthy/Falsey, so this is empty/nonempty output for odd/even respectively. Input should be in unary (as some number of ~) between the a and the S.

## Explanation

() pushes the empty string. : duplicates it and () wraps the top stack element in parentheses, meaning the stack now looks like "", "()". The input now appears in the program, meaning it is executed as code. ~ is the swap instruction in Underload, meaning that an even input is an even number of swaps, which doesn't affect the stack, and an odd input is an odd number of swaps, which has the same effect as a single swap. S outputs the top stack element, which will be () if the input was even, and the empty string if it was odd.

I mentioned that Underload doesn't have Truthy/Falsey, but the common way to represent booleans is as Underload code that either swaps or doesn't swap, meaning that the cheaty one-byte program S technically works as a valid submission.

# Pyt, 2 bytes

2%

Explanation

      Implicit input
2%    Mod 2
      Implicit print

Odd is truthy, even is falsy

Try it online!

Alternatively, also 2 bytes

2|

Explanation:

      Implicit input
2|    Is it divisible by 2?
      Implicit output

Even is truthy, odd is falsy

Try it online!

# Whitespace, 30 bytes

[S S S T S N _Push_2][S N S _Duplicate][S N S _Duplicate][T N T T _Read_STDIN_as_number][T T T _Retrieve][S N T _Swap_top_two][T S T T _Modulo][T N S T _Output]

Letters S (space), T (tab), and N (new-line) added as highlighting only. [..._some_action] added as explanation only.

Outputs 0 for even, 1 for odd. Try it online.

Explanation (with 5 as input):

Command    Explanation              Stack      Heap     STDIN    STDOUT
SSSTSN     Push 2                   [2]        {}
SNS        Duplicate top (2)        [2,2]      {}
SNS        Duplicate top (2)        [2,2,2]    {}
TNTT       Read STDIN as number     [2,2]      {2:5}    5
TTT        Retrieve                 [2,5]      {2:5}
SNT        Swap top two             [5,2]      {2:5}
TSTT       Modulo                   [1]        {2:5}
TNST       Output top as number     []         {2:5}             1

# Python 2, 13 bytes

lambda n:~n&1

Try it online!

No shorter than the other non-REPL Python answers, but uses slightly different logic. Takes the bitwise complement of the input number (-x-1) and performs a bitwise AND with 1 on it. Returns 1 if the complement is odd, and 0 if the complement is even - and therefore returns 1/0 if the original number was even/odd. (Could be shortened by 1 byte by removing the complement ~ if 0=True and 1=False is allowable.)

# Flobnar, 6 bytes

&
%@
2

Try it online! (requires the -d flag)

I don't think it's going to get much shorter than this.

## CHIP-8 assembly, 22 bytes

0x6001 F10A 410F 120C 8210 1202 8226 8F03 FF29 D005 1214

Takes in a number entered by the user, until it is terminated by the user pressing the F key, and prints out to the screen 0 if it's odd, or 1 if it's even.

The CHIP-8 had 16 8-bit registers (V0 to VF), with VF mostly being used for carry operations. The ROM was loaded into memory at address 0x200, which is why the jump operations are offset. It also contained display-representations of the numbers 0-9 and letters A-F in memory at 0x00, so that a programmer need not create them.

The code works like this:

0x200: 60 01    Set V0 to 0x01.
0x202: F1 0A    Wait for a key to be pressed, and enter the value into V1.
0x204: 41 0F    Skip the next instruction if V1 does not equal 0x0F.
0x206: 12 0C    Jump to instruction address 0x20C.
0x208: 82 10    Assign the value of V1 into V2.
0x20A: 12 02    Jump to instruction address 0x202.
0x20C: 82 26    Store the least significant bit of V2 into VF, right-shift V2 by one bit, and store the shifted value into V2.
0x20E: 8F 03    Set VF to VF XOR V0 (V0 is set previously to 1, so this is equivalent to NOT VF).
0x210: FF 29    Set the memory address pointer (I) to the character representation of the value in VF.
0x212: D0 05    Draw the data from the address pointer I to the screen, at pixel (0, 0), with a width of 5 pixels (the height is always 8 pixels).
0x214: 12 14    Jump to instruction address 0x214 (i.e., loop indefinitely).

# Z80Golf, 9 8 bytes

00000000: d5d2 0380 e601 ff76    .......v

Try it online!

### Disassembly

start:
push de
jp nc,$8003
and 1
rst $38
halt

Golfed a byte using the "input loop at start" pattern:

push de      ; Push 00 00 (return address) to the stack
jp nc,$8003  ; Escape the loop if carry is set (EOF)
; otherwise take next input and return to the start of the program
## Previous solution, 9 bytes
00000000: cd03 8030 fbe6 01ff 76 ...0....v
Try it online!
### Disassembly
start:
call $8003
jr nc, start
and 1
rst $38
halt
Accepts the decimal input (binary, octal, hexadecimal, or any other even base would work), and outputs ASCII 0 for even, 1 for odd.
call $8003 calls getchar, which stores the next char to register a, or sets the carry flag on EOF. Since the only significant data is the last char, the program simply calls getchar repeatedly until EOF. and 1 takes the parity, rst$38 is a golf idiom for call putchar, and halt terminates the program.
## 9 bytes, Human-readable output
00000000: cd03 8030 fbe6 31ff 76 ...0....v
Try it online!
Limiting the input to decimal or lower even bases, we can get more readable result ('0' = $30 and '1' =$31) by changing and 1 into and \$31. Also works for hexadecimal if you use 0123456789pqrstu, case sensitive :)
|
# RMS value of A.C
## Root Mean square value of AC
• We know that the time average value of AC over one full cycle is zero, and this can be proved easily.
• The instantaneous current i, and hence the time average of AC over a half cycle, could be positive for one half cycle and negative for the other half cycle, but the quantity i² always remains positive.
• So the time average of the quantity i² over one cycle is
<i²> = i0²/2
This is known as the mean square current.
• The square root of the mean square current is called the root mean square current, or rms current.
Thus,
irms = √<i²> = i0/√2 = 0.707 i0
Thus, the rms value of AC is 0.707 i0, i.e. 70.7% of the peak value of the alternating current.
• Similarly, the rms value of the alternating voltage or emf is
Vrms = V0/√2 = 0.707 V0
• If we allow the AC current represented by i = i0 sin(ωt+φ) to pass through a resistor of resistance R, the power dissipated due to the flow of current would be
P = i²R
• Since the magnitude of the current changes with time, the power dissipated in the circuit also changes.
• The average power dissipated over one complete current cycle would be
Pav = <i²>R = (i0²/2)R = (irms)²R
If we pass a direct current of magnitude irms through the resistor, the power dissipated, or the rate of production of heat, in this case would be
P = (irms)²R
• Thus, the rms value of AC is that value of steady (direct) current which would dissipate the same amount of power in a given resistance, in a given time, as would have been dissipated by the alternating current.
• This is why the rms value of AC is also known as the virtual value of the current.
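
These relations can be checked numerically. The following is a minimal sketch, assuming arbitrary illustrative values for the peak current i0, the resistance R and the frequency f (they are not taken from the text); it averages i² and i²R over one cycle:

```python
import numpy as np

i0, R, f = 2.0, 10.0, 50.0              # peak current (A), resistance (ohm), frequency (Hz) -- illustrative values
t = np.linspace(0.0, 1.0 / f, 100001)   # one full cycle
i = i0 * np.sin(2 * np.pi * f * t)      # instantaneous current

mean_square = np.trapz(i**2, t) * f     # time average of i^2 over the cycle
i_rms = np.sqrt(mean_square)

print(i_rms, i0 / np.sqrt(2))                    # both ~1.414 A, i.e. 0.707 * i0
print(np.trapz(i**2 * R, t) * f, i_rms**2 * R)   # average power equals (irms)^2 * R
```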
|
# 22.6: Assigning Oxidation Numbers
Once we move from the element iron to iron compounds, we need to be able to designate clearly the form of the iron ion. An example of this is iron that has been oxidized to form iron oxide during the process of rusting. Although Antoine Lavoisier first began the idea of oxidation as a concept, it was Wendell Latimer (1893 - 1955) who gave us the modern concept of oxidation numbers. His 1938 book The Oxidation States of the Elements and Their Potentials in Aqueous Solution laid out the concept in detail. Latimer was a well-known chemist who later became a member of the National Academy of Sciences. Not bad for a gentleman who started college planning on being a lawyer.
## Assigning Oxidation Numbers
The oxidation number is a positive or negative number that is assigned to an atom to indicate its degree of oxidation or reduction. In oxidation-reduction processes, the driving force for chemical change is in the exchange of electrons between chemical species. A series of rules have been developed to help us.
1. For free elements (uncombined state), each atom has an oxidation number of zero. $$\ce{H_2}$$, $$\ce{Br_2}$$, $$\ce{Na}$$, $$\ce{Be}$$, $$\ce{K}$$, $$\ce{O_2}$$, $$\ce{P_4}$$, all have oxidation number of 0.
2. Monatomic ions have oxidation numbers equal to their charge. $$\ce{Li^+} = +1$$, $$\ce{Ba^{2+}} = +2$$, $$\ce{Fe^{3+}} = +3$$, $$\ce{I^-} = -1$$, $$\ce{O^{2-}} = -2$$, etc. Alkali metal oxidation numbers $$= +1$$. Alkaline earth oxidation numbers $$= +2$$. Aluminum $$= +3$$ in all of its compounds. Oxygen's oxidation number $$= -2$$ except when in hydrogen peroxide $$\left( \ce{H_2O_2} \right)$$, or a peroxide ion $$\left( \ce{O_2^{2-}} \right)$$ where it is $$-1$$.
3. Hydrogen's oxidation number is $$+1$$, except for when bonded to metals as the hydride ion forming binary compounds. In $$\ce{LiH}$$, $$\ce{NaH}$$, and $$\ce{CaH_2}$$, the oxidation number is $$-1$$.
4. Fluorine has an oxidation number of $$-1$$ in all of its compounds.
5. Halogens ($$\ce{Cl}$$, $$\ce{Br}$$, $$\ce{I}$$) have negative oxidation numbers when they form halide compounds. When combined with oxygen, they have positive numbers. In the chlorate ion $$\left( \ce{ClO_3^-} \right)$$, the oxidation number of $$\ce{Cl}$$ is $$+5$$, and the oxidation number of $$\ce{O}$$ is $$-2$$.
6. In a neutral atom or molecule, the sum of the oxidation numbers must be 0. In a polyatomic ion, the sum of the oxidation numbers of all the atoms in the ion must be equal to the charge on the ion.
Example 22.6.1
What is the oxidation number for manganese in the compound potassium permanganate $$\left( \ce{KMnO_4} \right)$$?
Solution:
The oxidation number for $$\ce{K}$$ is $$+1$$ (rule 2)
The oxidation number for $$\ce{O}$$ is $$-2$$ (rule 2)
Since this is a compound (there is no charge indicated on the molecule), the net charge on the molecule is zero (rule 6)
So we have
\begin{align} +1 + \ce{Mn} + 4 \left( -2 \right) &= 0 \\ \ce{Mn} - 7 &= 0 \\ \ce{Mn} &= +7 \end{align}
When dealing with oxidation numbers, we must always include the charge on the atom.
Another way to determine the oxidation number of $$\ce{Mn}$$ in this compound is to recall that the permanganate anion $$\left( \ce{MnO_4^-} \right)$$ has a charge of $$-1$$. In this case:
\begin{align} \ce{Mn} + 4 \left( -2 \right) &= -1 \\ \ce{Mn} - 8 &= -1 \\ \ce{Mn} &= +7 \end{align}
Example 22.6.2
What is the oxidation number for iron in $$\ce{Fe_2O_3}$$?
Solution:
\begin{align} &\ce{O} \: \text{is} \: -2 \: \left( \text{rule 2} \right) \\ &2 \ce{Fe} + 3 \left( -2 \right) = 0 \\ &2 \ce{Fe} = 6 \\ &\ce{Fe} = 3 \end{align}
If we have the compound $$\ce{FeO}$$, then $$\ce{Fe} + \left( -2 \right) = 0$$ and $$\ce{Fe} = 2$$. Iron is one of those materials that can have more than one oxidation number.
The halogens (except for fluorine) can also have more than one number. In the compound $$\ce{NaCl}$$, we know that $$\ce{Na}$$ is $$+1$$, so $$\ce{Cl}$$ must be $$-1$$. But what about $$\ce{NaClO_3}$$?
\begin{align} \ce{Na} &= 1 \\ \ce{O} &= -2 \\ 1 + \ce{Cl} + 3 \left( -2 \right) &= 0 \\ 1 + \ce{Cl} - 6 &= 0 \\ \ce{Cl} - 5 &= 0 \\ \ce{Cl} &= +5 \end{align}
Not quite what we expected, but $$\ce{Cl}$$, $$\ce{Br}$$, and $$\ce{I}$$ will exhibit multiple oxidation numbers in compounds.
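
The arithmetic in these examples can also be automated. The following is a minimal sketch (not part of the original text; the function name and dictionary layout are my own) that solves rule 6 for a single unknown oxidation number, given the known numbers of the other atoms and the overall charge:

```python
def unknown_oxidation_number(known, total_charge=0):
    """Solve rule 6: oxidation numbers sum to the overall charge.

    `known` maps each other element to (count, oxidation number);
    the single unknown atom is assumed to appear once in the formula.
    """
    return total_charge - sum(count * ox for count, ox in known.values())

# Mn in KMnO4 (neutral compound): K = +1, O = -2
print(unknown_oxidation_number({"K": (1, +1), "O": (4, -2)}))        # +7

# Mn in the permanganate ion MnO4^- (charge -1)
print(unknown_oxidation_number({"O": (4, -2)}, total_charge=-1))     # +7

# Cl in NaClO3: Na = +1, O = -2
print(unknown_oxidation_number({"Na": (1, +1), "O": (3, -2)}))       # +5
```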
## Summary
• Rules for determining oxidation numbers are listed.
• Examples of oxidation number determinations are provided.
## Contributors
• CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon.
|
Monday, December 17, 2012
Storage: Active spares in RAID volumes
If you have a spare HDD in a chassis powered-up and spinning, then the best use of the power you're burning is to use the drive.
Sunday, November 18, 2012
Cross compiling i386-ELF Linux kernel on OS/X, Snow Leopard
This is NOT a tutorial on 'C', Software Development, the GNU toolchains, including 'make', or building a linux kernel. That is assumed knowledge.
There's already good documentation on the Web for building Android (ARM) kernels on OS/X, and some tantalising, though incomplete, comments on building i386 kernels: "it took a while to setup, then was OK"...
Although OS/X runs on x86 (and x86_64), it won't build an x86 Linux kernel natively. You still need to cross-compile because Linux uses ELF (Executable and Linkable Format) and OS/X uses its own multi-CPU format, "mach-o".
This environment variable must be set:
CROSS_COMPILE=i386-elf- [or the full path to your gcc tools, but only the common prefix]
Optionally, you can set (32-bit):
ARCH=x86
As well, I added these directories to the beginning of my PATH to catch gcc (HOSTCC) and i386-elf-gcc for the cross-compiler:
PATH=/opt/local/bin:/opt/local/sbin:/opt/local/i386-elf/bin:/opt/local/libexec/gcc/i386-elf/4.3.2:$PATH

The Linux kernel Makefile uses some trickery to provide verbose, quiet and silent modes; the default is "quiet". If you need to see the commands being issued, for debugging, set this additional environment variable:

KBUILD_VERBOSE=1

I chose to use the Macports native 'gcc', not the OS/X supplied compiler. Because the Macports i386-elf version of gcc has incorrect paths compiled in, I needed the 2 additional i386-elf directories.

Note: I had to make a symbolic link for i386-elf-gcc. The port "i386-elf-gcc @4.3.2_1" installed the full set of tools (as, ld, nm, strip, ...) into /opt/local/bin, but didn't install the short name ('gcc'), only the long version name: i386-elf-gcc-4.3.2, which the Linux kernel Makefile doesn't cater for.

The 'Macports' project provides many GNU tools pre-built, with source. Generally, it's a good first thing to try. I reported multiple faults and found them unresponsive and less than helpful. YMMV. The command is 'port', after the BSD tool of the same name. BSD delivered pure-source bundles; Macports does not. While Macports notionally updates itself, I had trouble with a major upgrade, initially available only as source, now available as a binary upgrade. There seems to be a bias towards newer OS/X environments. "Snow Leopard", Darwin 10.8.0, is now old. "Lion" and "Mountain Lion" have replaced it...

Ben Collins, 2010, has good notes, a working elf.h, and gcc-4.3.3 and binutils from Ubuntu Jaunty. A comment suggests GNU sed is necessary, not the standard OS/X sed. I made this change.

From 2010, using 'ports' to cross-compile to ARM by Plattan Mattan is useful. It uses the OS/X gcc as HOSTCC. The page suggests installing ports:

install libelf git-core

then using git to clone the kernel source. I installed ports:

gcc43 @4.3.6_7 (active)
i386-elf-binutils @2.20_0 (active)
i386-elf-gcc @4.3.2_1 (active)
libelf @0.8.13_2 (active)

Other useful pages:
Building GCC toolchain for ARM on Snow Leopard (using Macports)
Building i386-elf cross compiler and binutils on OS/X (from source).

I got a necessary hint from Alan Modra about i386-elf-as (the assembler) processing "/" as comments, not "divide", in macro expansions, as expected in the kernel source:

For compatibility with other assemblers, '/' starts a comment on the i386-elf target. So you can't use division. If you configure for i386-linux (or any of the bsds, or netware), you won't have this problem.

Do NOT select "a.out" as an executable file format in your .config file. The i386 processor isn't defined for it, so the compile fails with "SEGMNT_SIZE" not defined.

I went to kernel.org and downloaded a bzipped tar file of linux-2.6.34.13 ("Full Source") for my testing. It is always a good idea to check the MD5 of any download, if available. I wanted a stable, older kernel to test with. Your needs will vary.

I didn't run into the "malloc.h" problem noted by Plattan; it seemed to come with libelf.

I made three sets of changes (changes in red) to the standard linux Makefile (can apply as a patch):

mini-too:linux-2.6.34.13 steve$ diff -u ../saved/Makefile.dist Makefile
--- ../saved/Makefile.dist 2012-08-21 04:45:22.000000000 +1000
+++ Makefile 2012-11-20 14:10:46.000000000 +1100
@@ -231,7 +231,7 @@
HOSTCC = gcc
HOSTCXX = g++
-HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer
+HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -idirafter /opt/local/include
HOSTCXXFLAGS = -O2
# Decide whether to build built-in, modular, or both.
@@ -335,10 +335,10 @@
-Wbitwise -Wno-return-void $(CF) MODFLAGS = -DMODULE CFLAGS_MODULE =$(MODFLAGS)
-AFLAGS_MODULE = $(MODFLAGS) +AFLAGS_MODULE =$(MODFLAGS) -Wa,--divide
LDFLAGS_MODULE = -T $(srctree)/scripts/module-common.lds CFLAGS_KERNEL = -AFLAGS_KERNEL = +AFLAGS_KERNEL = -Wa,--divide CFLAGS_GCOV = -fprofile-arcs -ftest-coverage @@ -354,6 +354,7 @@ -fno-strict-aliasing -fno-common \ -Werror-implicit-function-declaration \ -Wno-format-security \ + -isystem /opt/local/i386-elf/include -idirafter /opt/local/lib/gcc/i386-elf/4.3.2/include/ -idirafter /usr/include -idirafter /usr/include/i386 \ -fno-delete-null-pointer-checks KBUILD_AFLAGS := -D__ASSEMBLY__ I created the required "elf.h", not supplied in port libelf, in /opt/local/include, specified above in HOSTCFLAGS: mini-too:linux-2.6.34.13 steve$ cat /opt/local/include/elf.h
/* @(#) $Id:$ */
#ifndef _ELF_H
#define _ELF_H
#include <libelf/gelf.h>
/* http://plattanimattan.blogspot.com.au/2010/04/cross-compiling-linux-on-mac-osx.html */
#define R_ARM_NONE 0
#define R_ARM_PC24 1
#define R_ARM_ABS32 2
#define R_MIPS_NONE 0
#define R_MIPS_16 1
#define R_MIPS_32 2
#define R_MIPS_REL32 3
#define R_MIPS_26 4
#define R_MIPS_HI16 5
#define R_MIPS_LO16 6
/* from /opt/local/libexec/llvm-3.1/include/llvm/Support/ELF.h */
/* or http://www.swissdisk.com/~bcollins/macosx/elf.h */
#define R_386_NONE 0
#define R_386_32 1
#define R_386_PC32 2
#define R_386_GOT32 3
#define R_386_PLT32 4
#define R_386_COPY 5
#define R_386_GLOB_DAT 6
#define R_386_JMP_SLOT 7 /* was R_386_JUMP_SLOT */
#define R_386_RELATIVE 8
#define R_386_GOTOFF 9
#define R_386_GOTPC 10
#define R_386_32PLT 11
#define R_386_TLS_TPOFF 14
#define R_386_TLS_IE 15
#define R_386_TLS_GOTIE 16
#define R_386_TLS_LE 17
#define R_386_TLS_GD 18
#define R_386_TLS_LDM 19
#define R_386_16 20
#define R_386_PC16 21
#define R_386_8 22
#define R_386_PC8 23
#define R_386_TLS_GD_32 24
#define R_386_TLS_GD_PUSH 25
#define R_386_TLS_GD_CALL 26
#define R_386_TLS_GD_POP 27
#define R_386_TLS_LDM_32 28
#define R_386_TLS_LDM_PUSH 29
#define R_386_TLS_LDM_CALL 30
#define R_386_TLS_LDM_POP 31
#define R_386_TLS_LDO_32 32
#define R_386_TLS_IE_32 33
#define R_386_TLS_LE_32 34
#define R_386_TLS_DTPMOD32 35
#define R_386_TLS_DTPOFF32 36
#define R_386_TLS_TPOFF32 37
#define R_386_TLS_GOTDESC 39
#define R_386_TLS_DESC_CALL 40
#define R_386_TLS_DESC 41
#define R_386_IRELATIVE 42
#define R_386_NUM 43
#endif /* _ELF_H */
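
To keep all of the environment settings above in one place, the invocation can be wrapped in a small driver script. This is only a sketch of the settings described in this post; the Macports paths come from above, while the "-j2" value and the "bzImage modules" targets are my assumptions, not requirements:

```python
import os
import subprocess

env = os.environ.copy()
env["CROSS_COMPILE"] = "i386-elf-"     # common prefix of the cross tools
env["ARCH"] = "x86"                    # 32-bit x86 target
env["KBUILD_VERBOSE"] = "1"            # show the full commands kbuild runs
env["PATH"] = ":".join([
    "/opt/local/bin",
    "/opt/local/sbin",
    "/opt/local/i386-elf/bin",
    "/opt/local/libexec/gcc/i386-elf/4.3.2",
    env["PATH"],
])

# Build the kernel image and modules from the unpacked source tree.
subprocess.check_call(["make", "-j2", "bzImage", "modules"],
                      cwd="linux-2.6.34.13", env=env)
```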
I didn't try specifying all variables on the command-line when invoking make. This might work, though incomplete (only 2 of the 3 changes):
$ make ARCH=x86 CROSS_COMPILE=i386-elf- HOSTCFLAGS="-idirafter /Users/steve/src/linux/linux-2.6.34.13/include/linux" AFLAGS_KERNEL="-Wa,--divide"

It took me some time to figure out how to browse on-line the kernel.org git repository for my specific kernel, to investigate the change history of a specific file. It's worth taking the time to learn this.

I was not able to figure out how to get the GNU make in OS/X to show me the full commands it was about to execute. "make -d" spits out a mountain of stuff on what it is doing, but not the commands. "make -n" is for dry-runs and prints commands, though whether or not that's what is run later, I'm not sure. See KBUILD_VERBOSE.

It also seems impossible to ask gcc to tell you what directory/ies it's using for the system include files. Whilst the Macports gcc works, it uses /usr/include, the default OS/X gcc directory, and creates some additional header files.

Current status: 19-Nov-2012. Failing in drivers/gpu with include files missing. Which "CC"?

CC drivers/gpu/drm/drm_auth.o
In file included from include/drm/drmP.h:75,
from drivers/gpu/drm/drm_auth.c:36:
include/drm/drm.h:47:24: error: sys/ioccom.h: No such file or directory
include/drm/drm.h:48:23: error: sys/types.h: No such file or directory

Final status: 20-Nov-2012. Untested - booting kernel.

BUILD arch/x86/boot/bzImage
Root device is (14, 1)
Setup is 12076 bytes (padded to 12288 bytes).
System is 3763 kB
CRC d747d6db
Kernel: arch/x86/boot/bzImage is ready (#1)
Building modules, stage 2.
MODPOST 2 modules
CC arch/x86/kernel/test_nx.mod.o
LD [M] arch/x86/kernel/test_nx.ko
CC drivers/scsi/scsi_wait_scan.mod.o
LD [M] drivers/scsi/scsi_wait_scan.ko

real 11m5.195s
user 8m31.722s
sys 1m50.243s

A number of header errors (missing or duplicate & incompatible declarations) were not solved, but sidestepped by unselecting the problem areas in the .config file (details below).

Additional changes: Creating the config file with

make defconfig

producing a .config that can be edited or patched.
Summary of subsystem items unselected in .config afterwards: # CONFIG_SUSPEND is not set # CONFIG_HIBERNATION is not set # CONFIG_ACPI is not set # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set # CONFIG_INET_LRO is not set # CONFIG_NETFILTER_XT_TARGET_CONNSECMARK is not set # CONFIG_NETFILTER_XT_TARGET_MARK is not set # CONFIG_NETFILTER_XT_TARGET_NFLOG is not set # CONFIG_NETFILTER_XT_TARGET_SECMARK is not set # CONFIG_NETFILTER_XT_TARGET_TCPMSS is not set # CONFIG_AGP is not set # CONFIG_DRM is not set # CONFIG_FB_CFB_FILLRECT is not set # CONFIG_FB_CFB_COPYAREA is not set # CONFIG_FB_CFB_IMAGEBLIT is not set You might try saving the patches below (a 'diff -u' of the above defconfig result to mine) and apply to the .config file as a patch: patch .config defconfig.patch --- ../saved/defconfig 2012-11-19 23:20:04.000000000 +1100 +++ .config 2012-11-19 23:32:49.000000000 +1100 @@ -1,7 +1,7 @@ # # Automatically generated make config: don't edit # Linux kernel version: 2.6.34.13 -# Mon Nov 19 18:57:25 2012 +# Mon Nov 19 15:53:57 2012 # # CONFIG_64BIT is not set CONFIG_X86_32=y @@ -390,7 +390,6 @@ CONFIG_X86_PAT=y CONFIG_ARCH_USES_PG_UNCACHED=y CONFIG_ARCH_RANDOM=y -CONFIG_EFI=y CONFIG_SECCOMP=y # CONFIG_CC_STACKPROTECTOR is not set # CONFIG_HZ_100 is not set @@ -401,7 +400,6 @@ CONFIG_SCHED_HRTICK=y CONFIG_KEXEC=y CONFIG_CRASH_DUMP=y -# CONFIG_KEXEC_JUMP is not set CONFIG_PHYSICAL_START=0x1000000 CONFIG_RELOCATABLE=y CONFIG_X86_NEED_RELOCS=y @@ -418,45 +416,11 @@ CONFIG_PM_DEBUG=y # CONFIG_PM_ADVANCED_DEBUG is not set # CONFIG_PM_VERBOSE is not set -CONFIG_CAN_PM_TRACE=y -CONFIG_PM_TRACE=y -CONFIG_PM_TRACE_RTC=y -CONFIG_PM_SLEEP_SMP=y -CONFIG_PM_SLEEP=y -CONFIG_SUSPEND=y -# CONFIG_PM_TEST_SUSPEND is not set -CONFIG_SUSPEND_FREEZER=y -CONFIG_HIBERNATION_NVS=y -CONFIG_HIBERNATION=y -CONFIG_PM_STD_PARTITION="" +# CONFIG_SUSPEND is not set +# CONFIG_HIBERNATION is not set # CONFIG_PM_RUNTIME is not set -CONFIG_PM_OPS=y -CONFIG_ACPI=y -CONFIG_ACPI_SLEEP=y -CONFIG_ACPI_PROCFS=y -CONFIG_ACPI_PROCFS_POWER=y -# CONFIG_ACPI_POWER_METER is not set -CONFIG_ACPI_SYSFS_POWER=y -CONFIG_ACPI_PROC_EVENT=y -CONFIG_ACPI_AC=y -CONFIG_ACPI_BATTERY=y -CONFIG_ACPI_BUTTON=y -CONFIG_ACPI_VIDEO=y -CONFIG_ACPI_FAN=y -CONFIG_ACPI_DOCK=y -CONFIG_ACPI_PROCESSOR=y -CONFIG_ACPI_HOTPLUG_CPU=y -# CONFIG_ACPI_PROCESSOR_AGGREGATOR is not set -CONFIG_ACPI_THERMAL=y -# CONFIG_ACPI_CUSTOM_DSDT is not set -CONFIG_ACPI_BLACKLIST_YEAR=0 -# CONFIG_ACPI_DEBUG is not set -# CONFIG_ACPI_PCI_SLOT is not set -CONFIG_X86_PM_TIMER=y -CONFIG_ACPI_CONTAINER=y -# CONFIG_ACPI_SBS is not set +# CONFIG_ACPI is not set # CONFIG_SFI is not set -# CONFIG_APM is not set # # CPU Frequency scaling @@ -479,11 +443,8 @@ # # CPUFreq processor drivers # -# CONFIG_X86_PCC_CPUFREQ is not set -CONFIG_X86_ACPI_CPUFREQ=y # CONFIG_X86_POWERNOW_K6 is not set # CONFIG_X86_POWERNOW_K7 is not set -# CONFIG_X86_POWERNOW_K8 is not set # CONFIG_X86_GX_SUSPMOD is not set # CONFIG_X86_SPEEDSTEP_CENTRINO is not set # CONFIG_X86_SPEEDSTEP_ICH is not set @@ -491,7 +452,6 @@ # CONFIG_X86_P4_CLOCKMOD is not set # CONFIG_X86_CPUFREQ_NFORCE2 is not set # CONFIG_X86_LONGRUN is not set -# CONFIG_X86_LONGHAUL is not set # CONFIG_X86_E_POWERSAVER is not set # @@ -513,9 +473,7 @@ CONFIG_PCI_GOANY=y CONFIG_PCI_BIOS=y CONFIG_PCI_DIRECT=y -CONFIG_PCI_MMCONFIG=y CONFIG_PCI_DOMAINS=y -# CONFIG_DMAR is not set CONFIG_PCIEPORTBUS=y # CONFIG_HOTPLUG_PCI_PCIE is not set CONFIG_PCIEAER=y @@ -528,7 +486,6 @@ # CONFIG_PCI_STUB is not set CONFIG_HT_IRQ=y # CONFIG_PCI_IOV is not set 
-CONFIG_PCI_IOAPIC=y CONFIG_ISA_DMA_API=y # CONFIG_ISA is not set # CONFIG_MCA is not set @@ -556,7 +513,6 @@ # CONFIG_HOTPLUG_PCI_FAKE is not set # CONFIG_HOTPLUG_PCI_COMPAQ is not set # CONFIG_HOTPLUG_PCI_IBM is not set -# CONFIG_HOTPLUG_PCI_ACPI is not set # CONFIG_HOTPLUG_PCI_CPCI is not set # CONFIG_HOTPLUG_PCI_SHPC is not set @@ -564,7 +520,7 @@ # Executable file formats / Emulations # CONFIG_BINFMT_ELF=y -CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y +# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set CONFIG_HAVE_AOUT=y # CONFIG_BINFMT_AOUT is not set CONFIG_BINFMT_MISC=y @@ -610,7 +566,7 @@ # CONFIG_INET_XFRM_MODE_TRANSPORT is not set # CONFIG_INET_XFRM_MODE_TUNNEL is not set # CONFIG_INET_XFRM_MODE_BEET is not set -CONFIG_INET_LRO=y +# CONFIG_INET_LRO is not set # CONFIG_INET_DIAG is not set CONFIG_TCP_CONG_ADVANCED=y # CONFIG_TCP_CONG_BIC is not set @@ -671,11 +627,11 @@ CONFIG_NF_CONNTRACK_SIP=y CONFIG_NF_CT_NETLINK=y CONFIG_NETFILTER_XTABLES=y -CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y -CONFIG_NETFILTER_XT_TARGET_MARK=y -CONFIG_NETFILTER_XT_TARGET_NFLOG=y -CONFIG_NETFILTER_XT_TARGET_SECMARK=y -CONFIG_NETFILTER_XT_TARGET_TCPMSS=y +# CONFIG_NETFILTER_XT_TARGET_CONNSECMARK is not set +# CONFIG_NETFILTER_XT_TARGET_MARK is not set +# CONFIG_NETFILTER_XT_TARGET_NFLOG is not set +# CONFIG_NETFILTER_XT_TARGET_SECMARK is not set +# CONFIG_NETFILTER_XT_TARGET_TCPMSS is not set CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y CONFIG_NETFILTER_XT_MATCH_MARK=y CONFIG_NETFILTER_XT_MATCH_POLICY=y @@ -853,13 +809,6 @@ CONFIG_PROC_EVENTS=y # CONFIG_MTD is not set # CONFIG_PARPORT is not set -CONFIG_PNP=y -CONFIG_PNP_DEBUG_MESSAGES=y - -# -# Protocols -# -CONFIG_PNPACPI=y CONFIG_BLK_DEV=y # CONFIG_BLK_DEV_FD is not set # CONFIG_BLK_CPQ_DA is not set @@ -950,7 +899,6 @@ CONFIG_ATA=y # CONFIG_ATA_NONSTANDARD is not set CONFIG_ATA_VERBOSE_ERROR=y -CONFIG_ATA_ACPI=y CONFIG_SATA_PMP=y CONFIG_SATA_AHCI=y # CONFIG_SATA_SIL24 is not set @@ -969,7 +917,6 @@ # CONFIG_SATA_VIA is not set # CONFIG_SATA_VITESSE is not set # CONFIG_SATA_INIC162X is not set -# CONFIG_PATA_ACPI is not set # CONFIG_PATA_ALI is not set CONFIG_PATA_AMD=y # CONFIG_PATA_ARTOP is not set @@ -1062,7 +1009,6 @@ # CONFIG_EQUALIZER is not set # CONFIG_TUN is not set # CONFIG_VETH is not set -# CONFIG_NET_SB1000 is not set # CONFIG_ARCNET is not set CONFIG_PHYLIB=y @@ -1364,7 +1310,6 @@ # CONFIG_INPUT_PCSPKR is not set # CONFIG_INPUT_APANEL is not set # CONFIG_INPUT_WISTRON_BTNS is not set -# CONFIG_INPUT_ATLAS_BTNS is not set # CONFIG_INPUT_ATI_REMOTE is not set # CONFIG_INPUT_ATI_REMOTE2 is not set # CONFIG_INPUT_KEYSPAN_REMOTE is not set @@ -1372,7 +1317,6 @@ # CONFIG_INPUT_YEALINK is not set # CONFIG_INPUT_CM109 is not set # CONFIG_INPUT_UINPUT is not set -# CONFIG_INPUT_WINBOND_CIR is not set # # Hardware I/O ports @@ -1420,7 +1364,6 @@ CONFIG_SERIAL_8250_CONSOLE=y CONFIG_FIX_EARLYCON_MEM=y CONFIG_SERIAL_8250_PCI=y -CONFIG_SERIAL_8250_PNP=y # CONFIG_SERIAL_8250_CS is not set CONFIG_SERIAL_8250_NR_UARTS=32 CONFIG_SERIAL_8250_RUNTIME_UARTS=4 @@ -1464,8 +1407,6 @@ # CONFIG_NSC_GPIO is not set # CONFIG_CS5535_GPIO is not set # CONFIG_RAW_DRIVER is not set -CONFIG_HPET=y -# CONFIG_HPET_MMAP is not set # CONFIG_HANGCHECK_TIMER is not set # CONFIG_TCG_TPM is not set # CONFIG_TELCLOCK is not set @@ -1475,7 +1416,6 @@ CONFIG_I2C_COMPAT=y # CONFIG_I2C_CHARDEV is not set CONFIG_I2C_HELPER_AUTO=y -CONFIG_I2C_ALGOBIT=y # # I2C Hardware Bus support @@ -1500,11 +1440,6 @@ # CONFIG_I2C_VIAPRO is not set # -# ACPI drivers -# -# CONFIG_I2C_SCMI is not set - -# # I2C 
system bus drivers (mostly embedded / system-on-chip) # # CONFIG_I2C_OCORES is not set @@ -1625,12 +1560,6 @@ # CONFIG_SENSORS_HDAPS is not set # CONFIG_SENSORS_LIS3_I2C is not set # CONFIG_SENSORS_APPLESMC is not set - -# -# ACPI drivers -# -# CONFIG_SENSORS_ATK0110 is not set -# CONFIG_SENSORS_LIS3LV02D is not set CONFIG_THERMAL=y # CONFIG_THERMAL_HWMON is not set CONFIG_WATCHDOG=y @@ -1713,42 +1642,19 @@ # # Graphics support # -CONFIG_AGP=y -# CONFIG_AGP_ALI is not set -# CONFIG_AGP_ATI is not set -# CONFIG_AGP_AMD is not set -CONFIG_AGP_AMD64=y -CONFIG_AGP_INTEL=y -# CONFIG_AGP_NVIDIA is not set -# CONFIG_AGP_SIS is not set -# CONFIG_AGP_SWORKS is not set -# CONFIG_AGP_VIA is not set -# CONFIG_AGP_EFFICEON is not set +# CONFIG_AGP is not set CONFIG_VGA_ARB=y CONFIG_VGA_ARB_MAX_GPUS=16 -# CONFIG_VGA_SWITCHEROO is not set -CONFIG_DRM=y -CONFIG_DRM_KMS_HELPER=y -# CONFIG_DRM_TDFX is not set -# CONFIG_DRM_R128 is not set -# CONFIG_DRM_RADEON is not set -# CONFIG_DRM_I810 is not set -# CONFIG_DRM_I830 is not set -CONFIG_DRM_I915=y -# CONFIG_DRM_I915_KMS is not set -# CONFIG_DRM_MGA is not set -# CONFIG_DRM_SIS is not set -# CONFIG_DRM_VIA is not set -# CONFIG_DRM_SAVAGE is not set +# CONFIG_DRM is not set # CONFIG_VGASTATE is not set CONFIG_VIDEO_OUTPUT_CONTROL=y CONFIG_FB=y # CONFIG_FIRMWARE_EDID is not set # CONFIG_FB_DDC is not set # CONFIG_FB_BOOT_VESA_SUPPORT is not set -CONFIG_FB_CFB_FILLRECT=y -CONFIG_FB_CFB_COPYAREA=y -CONFIG_FB_CFB_IMAGEBLIT=y +# CONFIG_FB_CFB_FILLRECT is not set +# CONFIG_FB_CFB_COPYAREA is not set +# CONFIG_FB_CFB_IMAGEBLIT is not set # CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set # CONFIG_FB_SYS_FILLRECT is not set # CONFIG_FB_SYS_COPYAREA is not set @@ -1773,13 +1679,11 @@ # CONFIG_FB_VGA16 is not set # CONFIG_FB_UVESA is not set # CONFIG_FB_VESA is not set -CONFIG_FB_EFI=y # CONFIG_FB_N411 is not set # CONFIG_FB_HGA is not set # CONFIG_FB_S1D13XXX is not set # CONFIG_FB_NVIDIA is not set # CONFIG_FB_RIVA is not set -# CONFIG_FB_I810 is not set # CONFIG_FB_LE80578 is not set # CONFIG_FB_MATROX is not set # CONFIG_FB_RADEON is not set @@ -2241,30 +2145,12 @@ # # CONFIG_STAGING is not set CONFIG_X86_PLATFORM_DEVICES=y -# CONFIG_ACER_WMI is not set -# CONFIG_ASUS_LAPTOP is not set -# CONFIG_FUJITSU_LAPTOP is not set -# CONFIG_TC1100_WMI is not set -# CONFIG_MSI_LAPTOP is not set -# CONFIG_PANASONIC_LAPTOP is not set -# CONFIG_COMPAL_LAPTOP is not set -# CONFIG_SONY_LAPTOP is not set -# CONFIG_THINKPAD_ACPI is not set -# CONFIG_INTEL_MENLOW is not set -CONFIG_EEEPC_LAPTOP=y -# CONFIG_ACPI_WMI is not set -# CONFIG_ACPI_ASUS is not set -# CONFIG_TOPSTAR_LAPTOP is not set -# CONFIG_ACPI_TOSHIBA is not set -# CONFIG_TOSHIBA_BT_RFKILL is not set -# CONFIG_ACPI_CMPC is not set # # Firmware Drivers # # CONFIG_EDD is not set CONFIG_FIRMWARE_MEMMAP=y -CONFIG_EFI_VARS=y # CONFIG_DELL_RBU is not set # CONFIG_DCDBAS is not set CONFIG_DMIID=y @@ -2604,7 +2490,6 @@ # CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set # CONFIG_SECURITY_SMACK is not set # CONFIG_SECURITY_TOMOYO is not set -# CONFIG_IMA is not set CONFIG_DEFAULT_SECURITY_SELINUX=y # CONFIG_DEFAULT_SECURITY_SMACK is not set # CONFIG_DEFAULT_SECURITY_TOMOYO is not set Saturday, August 11, 2012 Man-in-the-Middle Port Protection I'm wondering if this idea could work as a generic solution to the "OMG, the printer's been hacked' problem". 
There are a very large classes of important legacy IP devices that can be easily compromised and either used as "zombies" in a bot-net, as relay devices or like Infrastructure controllers, be high-value targets for disruptive hackers. What they share in common is: • their software can't be fixed or upgraded to current "hardness" levels and/or support current security protocols, like 802.1x, and • full replacement, or "fork-lift upgrade", is deemed unwarranted or infeasible. The "Man-in-the Middle" (MitM) single ethernet port protection solution comes in two parts: • a dongle or wall-wart to connect in front of the printer, perhaps with Power-over-Ethernet (PoE) and • a central server/filter/firewall that the dongle(s) connect back to. The dongle needs two ethernet connections and the software has to learn two things: • up-stream/down-stream traffic flows, which side is the printer, which is the network, and • the IP number + MAC address of the device. It's job is to transparently take over the IP number and MAC address of the printer. First it must gain itself a second IP number for its work and authenticate itself to the network if 802.1x is in use. Having done that, it establishes a secure tunnel, via SSL or SSH, back to the central security device where the real security measures are taken. The central security device can be implemented as a cloud of local devices, centrally managed. The device can be mimiced either locally or centrally, depending on network configuration, latency and traffic volume concerns and devices available. Note that the protected device is behind a dongle, it will never be seen 'bare' on the network again, so conflicts will not arise. To the printer or device-under-control, no change in the network or environment will be discernible. To all other devices on the network, the printer or device-under-control will appear to have had a firmware upgrade, but otherwise be identical. First option is for all network traffic destined for the printer has to be shunted back to the central secure device, modulo trivial ipfilter rules. This includes preventing any unauthorised outbound traffic, or even directing all outbound packets for analysis through the central security device for Intrusion Detection analysis. Once the traffic is back at the central secure device, it can be properly inspected and cleaned, then turned around on the same SSL/SSH tunnel. Second option, the central secure device assumes both IP number and MAC address of the device-under-control by advertising it's IP and the same MAC addrs. It can also provide 802.1x-client facilities. The central secure device then forwards only valid traffic to the dongle at the printer over SSL/SSH, and response traffic is tunnelled back and inspected over the same path. The difference is where the impersonated IP number/MAC address now appears in the network: • either exactly where it always has been, or • in a central location. A 2-port version of the Raspberry Pi would be able to do this, plus some tinkering in a firewall appliance. If very short UTP leads are used on the MitM dongle, it will be very difficult to remove from the printer or device under control. The Use Case is simple: • get new secure server • install 802.1x certificates for printers/devices-under-control in the secure server • install 802.1x certs in dongles, setup in DHCP, don't have to be static IP numbers. 
• egister SSH or SSL keys of dongle to secure server • check locally the dongle works and correctly controls a test device • go to printer, install dongle, check it works. Requires normal traffic and a scan for vulnerabilities. • there might be a some outage as the double shuffle happens, the MAC address may now appears elsewhere in the network • not sure how real-time swap might affects existing IP connections. If the disruption in traffic flow is under the TCP disconnect window, it won't be noticed. It's not an option to leave existing connections uninspected, they could Botnet control channel. Do these things already exist? I don't know.. I can't believe something this simple hasn't already been built. There are 2½ variants that might also be interesting. For variant #2, I'm not sure what can already be done in switch software. 1. Plug 2-port MitM dongle directly into switch, capture all packets at the switch, not at the printer. 1A. Plug a multi-port MitM dongle into switch so that it acts like a switch itself, having one up-stream link via the switch, and provides multiple ports for controlled devices to be connected to. Needs the host switch to allow multiple MAC's per port. Problem with remote dongle: Printer/device-under-control can be relocated (physically or connection) and lose protection/filtering. 2. High-end Switches have "Port mirroring" software, can that be used or modified for the MitM packet redirection? • Port-mirroring sends a copies of ingress and egress packets to another port, even on another switch. • The remote Filter/firewall (FF) needs two ports, A & B, one ingress, one egress.. • Network egress traffic is redirected to Port-A of the FF, even on another VLAN. • Network Ingress traffic is instead received from Port-A of the FF. • Port-B traffic of the FF is sent back as egress traffic of the controlled port, and • ingress traffic of the controlled port is sent to Port-B traffic of the FF. • This achieves logically what the physical dongle+physical patch leads achieved, passing all traffic via an MitM Filter/Firewall. 2A. Multi-port filter/firewall. • Can switch software multiplex multiple MAC addresses onto two ports, serving multiple devices under control with the same 2-port hardware, or • does it need a pair of ports on the filter/firewall for each device under control, or • 1 upstream link to capture all redirected traffic, and 1 ethernet port for each device under control. Thursday, June 07, 2012 The New Storage Hierarchy and 21 Century Databases The new hardware capabilities and cost/performance characteristics of storage and computer systems means there has to be a radical rethink of how databases work and are organised. The three main challenges I see are: • SSD and PCI Flash memory with "zero" seek time, • affordable Petabyte HDD storage, and • object-based storage replacing "direct attach" devices. These technical tradeoff changes force these design changes: • single record access time is no longer dominated by disk rotations, old 'optimisations' are large, costly, slow and irrelevant, • the whole "write region" can be held in fast memory changing cache requirements and design, • Petabtye storage allows "never delete" of datasets which pose new problems: • how does old and new data get physically organised? • what logical representations can be used to reduce queries to minimal collections? • how does the one datastore support conflicting use types? [real-time transactions vs data wharehouse] • How are changed Data Dictionaries supported? 
• common DB formats are necessary as the lifetime of data will cover multiple products and their versions. • Filesystems and Databases have to use the same primitives and use common tools for backups, snapshots and archives. • As do higher order functions/facilities: • compression, de-duplication, transparent provisioning, Access Control and Encryption • Data Durability and Reliability [RAID + geo-replication] • How is security managed over time with unchanging datasets? • How is Performance Analysis and 'Tuning' performed? • Can Petabyte datasets be restored or migrated at all? • DB's must continue running without loss or performance degradation as the underlying storage and compute elements are changed or re-arragned. • How is expired data 'cleaned' whilst respecting/enforcing any legal caveats or injunctions? • What data are new Applications tested against? • Just a subset of "full production"? [doesn't allow Sizing or Performance Testing] • Testing and Developing against "live production" data is extremely unwise [unintended changes/damage] or a massive security hole. But when there's One DB, what to do? • What does DB roll-back and recovery mean now? What actions should be expected? • Is "roll-back" or reversion allowable or supportable in this new world? • Can data really be deleted in a "never delete" dataset? • Is the Accounting notion of "journal entries" necessary? • What happens when logical inconsistencies appear in geo-diverse DB copies? • can they be detected? • can they ever be resolved? • How do these never-delete DB's interface or support corporate Document and Knowledge Management systems? • Should summarises ever be made and stored automatically under the many privacy and legal data-retention laws, regulations and policies around? • How are conflicting multi-jurisdiction issues resolved for datasets with wide geo-coverage? • How are organisation mergers accomplished? • Who owns what data when an organisation is de-merged? • Who is responsible for curating important data when an organisation disbands? XML is not the answer: it is a perfect self-containing data interchange format, but not an internal DB format. Redesign and adaption is needed at three levels: • Logical Data layout, query language and Application interface. • Physical to Logical mapping and supporting DB engines. • Systems Configuration, Operations and Admin. We now live in a world of VM's, transparent migration and continuous uninterrupted operations: DB's have to catch up. They also have to embrace the integration of multiple disparate data sources/streams as laid out in the solution Jerry Gregoire created for Dell in 1999 with his "G2 Strategy": • Everything should be scalable through the addition of servers. • Principle application interface should be a web browser. • Key programming with Java or Active X type languages. • Message brokers used for application interfacing. • Technology selection on an application by application basis . • Databases should be interchangeable. • Extend the life of legacy systems by wrapping them in a new interface. • Utilize "off the shelf systems" where appropriate. • In house development should rely on object based technology - new applications should be made up of proven object puzzle pieces. Data Discovery, Entity Semantics with range/limits (metadata?) and Rapid/Agile Application development are critical issues in this new world. Tuesday, February 14, 2012 charging a Nokia phone (C2-01) from USB. Need enough power. 
[this piece is a place marker for people searching on "how to" charge from USB. Short answer: Plug in any modern phone and it's supposed to "Just Work".] A couple of weeks ago I bought an unlocked Nokia C2-01 from local retailer Dick Smith's. I wanted bluetooth, 3G capability and got a direct micro-USB (DC-6) connector too. I bought a bluetooth "handsfree" for my car as well. It came with a cigarette-lighter charger with a USB socket and a USB mini (not micro) cable. I remembered that all mobiles sold in the European Union were mandated to use a USB charger [MoU in 2009, mandate later] and thought I'd be able to use the car-charger for everything: phone, camera, ... The supplied 240V external phone charger worked well for the C2-01. But I couldn't get it to charge from the in-car USB charger. Turns out the handset charger could only supply 400ma, not the 500ma of the USB standard. Bought another in-car USB charger from Dick Smith's: works fine with both. What had confused me was the phone wouldn't charge when I tested it with my (old) powered USB hub. Is it old and tired or was the phone already fully charged?? Need to properly test that. I hadn't tried it with my Mac Mini, jumping to the unwarranted conclusion "this phone doesn't do USB charging". When tested, worked OK directly with the Mac... There's one little wrinkle. Devices like the iPad that charge from USB take more than 500ma (700ma?) - which Mac's supply, but are more than the standard. [Why some USB adaptors for portable HDD's have two USB-A connectors.] I know a higher current has been specified for USB - but can't remember the variant. Is it just the new "USB 3" or current "USB 2" as well? Sunday, February 12, 2012 shingled write disks: bad block 'mapping' not A Good Idea Singled-write disks can't update sectors in-place. Plus they are likely to have sectors larger than the current 2KB. [8KB?] The current bad-block strategy of rewriting blocks in another region of the disk is doubly flawed: • extremely high-density disks should be treated as large rewritable Optical Disks. They are great at "seek and stream", but have exceedingly poor GB/access/sec ratios. Forcing the heads to move whilst streaming data affects performance radically and should be avoided to achieve consistent/predictable good performance. • Just where and how should the spare blocks be allocated? Not the usual "end of the disk", which forces worst case seeks. "In the middle" forces long seeks, which is better, but not ideal. "Close by", i.e. for every few shingled-write bands or regions, include spare blank tracks (remembering they are 5+ shingled-tracks wide). My best strategy, in-place bad-block identification and avoidance, is two-fold: • It assumes large shingled-write bands/regions: 1-4GB. • Use a 4-16GB Flash memory as a full-region buffer, and perform continuous shingled-writes in a band/region. This allows the use of CD-ROM style Reed-Solomon product codes to cater for/correct long burst errors at low overhead. • After write, reread the shingle-write band/region and look for errors or "problematic" recording (low read signal), then re-record. The new write stream can put "synch patterns" in the not to be used areas, the heads spaced over problematic tracks or the track-width widened for the whole or part of the band/region. This moves the cost of bad-blocks from read-time to write-time. It potentially slows the sequential write speed of the drive, but are you writing to a "no update-in-place" device for speed? No. 
You presumably also want the best chance possible of retrieving the data later on. Should the strategy be tunable for the application? I'm not sure. Firmware size and complexity must be minimal for high-reliability and low defect-rates. Only essential features can be included to achieve this aim... Monday, February 06, 2012 modern HDD's: No more 'cylinders' Another rule busted into a myth. How does this affect File Systems, like the original Berkeley Fast File System? There's a really interesting piece of detail in a 2010 paper on "Shingled Writes". Cylinder organisations are no longer advantageous. It's faster to keep writing on the one surface than to switch heads. With very small feature/track sizes, the time taken for a head-switch is large. The new head isn't automatically 'on track', it has to find the track... "settling time". Bands consist of contiguous tracks on the same surface. At first glance, it seems attractive to incorporate parallel tracks on all surfaces (i.e., cylinders) into bands. However, a switch to another track in the same cylinder takes longer than a seek to an adjacent track: thermal differences within the disk casing may prevent different heads from hovering over the same track in a cylinder. To switch surfaces, the servo mechanism must first wait for several blocks of servo information to pass by to ascertain its position before it can start moving the head to its desired position exactly over the desired track. In contrast, a seek to an adjacent track starts out from a known position. In both cases, there is a settling time to ensure that the head remains solidly over the track and is not oscillating over it. Because of this difference in switching times, and contrary to traditional wisdom regarding colocation within a cylinder, bands are better constructed from contiguous tracks on the same surface. Saturday, February 04, 2012 Intra-disk Error Correction: RAID-4 in shingled-write drives High density shingled-write drives cannot succeed without especial attention being paid to Error Correction, not just error detection. Sony/Philips realised this when developing the Compact Digital Audio Disk (CD) around 1980 and then again in 1985 with the "Yellow Book" CD-ROM standard for data-on-CD. The intrinsic bit error rate of ~ 1 in 105 becomes "infinitesimal" to quote one tutorial, with burst errors of ~4,000 bits corrected by the two lower layers. Error rates and sensitivity to defects increase considerably as feature sizes reach their limit. The 256Kbit DRAM chips took years to come into production after 64Kbit chips because manufacturing yields were low. Almost every chip worked well enough, but had some defects causing it to be failed in testing. The solution was to overbuild the chips and swap defective columns with spares during testing. Shingled-write disks, with their "replace whole region, never update-in-place", allow for a different class of Error Protection. RAID techniques with fixed parity disks seem a suitable candidate when individual sectors are never updated. Network Appliance very successfully leveraged this with their WAFL file system. That shingled-write disks require good Error Correction should be without dispute. What type of ECC (Error Correcting Code) to choose is an engineering problem based on the expected types of errors and the level of Data Protection required. I've previously written that for backup and archival purposes, the probable main uses of shingled-write disks, bit error rates of 1 in 1060 should be a minimum. 
One of the advantages of shingled-write disks, is that each shingled-write region can be laid down in one go from a Flash memory buffer. It can then be re-read and rewritten catering for the disk characteristics found: • excessive track cross-talk, • writes affected by excessive head movement (external vibration), • individual media defects or moving contamination, • areas of poor media, and • low signal or high signal-to-noise ratio due to age, wear or production variations. Depending on the application, multiple rewrites may be attempted. It would even be possible, given spare write-regions, for drives to periodically read and rewrite all data to the new areas. This is fraught because the extra "duty cycle" will decrease drive life plus if the drive finds uncorrectable errors when the attached host(s) weren't addressing it, what should be done? Reed-Solomon encoding is well proven in Optical Disks: CD, CD-ROM and DVD and probably in-use now for 2Kb sector disks. Reed-Solomon codes can be "tuned" to the application, the amount of parity overhead can be varied and other techniques like scrambling and combined in Product Codes. R-S codes have a downside: complexity of encoders and decoders. [This can mean speed and throughput as well. Some decoding algorithmns require multiple passes to correct all errors.] For a single platter shingled-write drive, Error Correcting codes (e.g. Reed-Solomon) are the only option to address long burst errors caused by recording drop-outs. For multi-platter shingled-write disks, another option is possible: RAID-4, or block-wise parity (XOR) on a dedicated drive (in this case, 'surface'). 2.5 in drives can have 2 or 3 platters, i.e. 4 or 6 surfaces. Dedicating one surface to parity gives 25% and 16.7% overhead respectively, higher than the ~12.5% Reed-Solomon overhead in the top layer of CD-ROM's. With 4 platters, or 8 surfaces, overhead is 12.5%, matching that of CD-ROM, layer 3. XOR parity generation and checking is fast, efficient and well understood, this is it's attraction. But despite a large overhead, it: • can at best only correct a single sector in error, fails on two dead sectors in the sector set, • relies on the underlying layer to flag drop-outs/erasures, and • relies on the CRC check to be perfect. If the raw bit error rate is 1 in 1014 with 2Kb sectors. The probability of any sector having an uncorrected error is 6.25 x 10 -9. The probability of two sectors in a set being in error is: 6.25 x 10 -9 * 6.25 x 10 -9 = 4 x 10 -17 This is well below what CD-ROM achieves. But, to give intra-disk RAID-4 its due: • corrects a burst error of 16,000 bits. Four times the CD limit. • will correct every fourth sector on each surface • is deterministic in speed. Reed-Solomon decoding algorithms can require multiple passes to fully correct all data. I'm thinking the two schemes could be used together and would complement each other. Just how, not yet sure. A start would be to group together 5-6 sectors with a shared ECC in an attempt to limit the number of ganged failed sector reads in a RAID'd sector set. Multiple arms/actuators and shingled-write drives Previously, I've written on last-gen (Z-gen) shingled-write drives and mentioned multiple independent arms/actuators: • separating "heavy" heads (write + heater) from lightweight read-only, and • using dual sets of heads, either read-only or read-write. There is some definitive work by Dr. 
Sudhanva Gurumurthi of University of Virginia and his students on using multiple arms/actuators in current drives, especially those with Variable Angular Velocity, not Constant Angular Velocity, maybe nearer Constant Linear Velocity - approaches used in Optical drives. E.g. "Intra-disk Parallelism" thesis and "Energy-Efficient Storage Systems" page. The physics are good and calculations impressive, but where is the commercial take-up? Extra arms/actuators and drive/head electronics are expensive, plus need mounting area on the case. What's the "value proposition" for the customer? Both manufacturers and consumers have to be convinced it's a worthwhile idea and there is some real value. Possibly the problem is two-fold: • extra heads don't increase capacity, only reduce seek time (needed for "green" drives), a hard sell. • would customers prefer two sets of heads mounted in two drives with double the capacity, the flexibility to mirror data and replace individual units. Adopting dual-heads in shingled-write drives might be attractive: • shingled-write holds the potential to double or more the track density with the same technology/parts. [Similar to the 50% increase early RLL controllers gave over MFM drives.] Improving drive$$/GB is at least an incentive to produce and buy them. • We've no idea how sensitive to vibration drives with these very small bit-cells will be. Having symmetric head movement will cancel most vibration harmonics, helping settling inside the drive and reducing impact externally. To appreciate the need Ed Grochowski, in 2011 compared DRAM, Flash and HDD, calculating bitcell sizes for a 3.5 in disk and 750Gb platter used in 3TB drives (max 5 platters in a 25.4 mm thick drive). The head lithography is 37nm, tracks are 74nm wide and now with perpendicular recording, 13nm long. The outside track is 87.5mm diameter, or 275 mm in length, hold a potential 21M bitcells, yielding 2-2.5MB of usable data. With 2KB sectors, ~1,000 sectors/track maximum. The inner track is 25.5mm diameter, 80mm in length: 29% of the outside track length. The 31mm wide write-area contains up to 418,000 tracks with a total length of ~5 km. Modern drives group tracks in "zones" and vary rotational velocity. The number of zones and how closely they approximate "Constant Linear Velocity", like early CD drives, isn't discussed in vendors data sheets. While Grocowski doesn't mention clocking or sector overheads (headers, sync bits, CRC/ECC) and inter-sector gaps. Working backwards from the total track length, the track 'pitch' is around 150nm, leaving a gap of roughly a full track width between tracks. I've not seen mentioned bearing runout and wobble that require heads to constantly adjust tracking, a major issue with Optical disks. Control of peripheral disk dimensions and ensuing problems is discussed. As tracks become thinner, seeking to, and staying on, a given track becomes increasingly difficult. These are extremely small targets to find on a disk and tracking requires very fine control needing both very precise electronics and high-precision mechanical components in the arms and actuators. Dr Gurumurthi notes in "HDD basics" that this "settling-time" becomes more important with smaller disks and higher track density. This, as well as thinner tracks, is the space that shingled-writing is looking to exploit. The track width and pitch become the same, around 35nm for a 4-fold increase in track density using current heads, less inter-region gaps and other overheads. 
Introducing counter-balancing dual heads/actuators may be necessary to successfully track the very small features of shingled-write disks. A 2-4 times capacity gain would justify the extra cost/complexity for manufacturers and customers.
Wednesday, February 01, 2012
Z-gen hard disks: shingled writes
We are approaching the limits of magnetic hard disk drives, probably before 2020, with 4-8TB per 2.5 inch platter. One of the new key technologies proposed is "shingled writes", where new tracks partially overwrite an adjacent, previously written track, making the effective track-width of the write heads much smaller. Across the disk, multiple inter-track blank (guard) areas are needed to allow the shingling to start and finish, creating "write regions" (super-tracks?) as the smallest recordable area, instead of single tracks. The cost of discarding the guard distance between tracks is higher cross-talk, requiring more aggressive low-level Error Correction and Detection schemes.
In the worst case, a single sector update, the drive has to read the whole write-region into local memory, update the sector and then rewrite the whole write-region. These multi-track writes, with one disk revolution per track, are not only slow and make the drive unavailable for the duration, but require additional internal resources to perform, including memory to store a whole region. Current drive buffers would limit the size of regions to a few 10's of MB, which may not yield a worthwhile capacity improvement.
The shingled-write technique has severe limitations for random "update-in-place" usage:
• write-regions either have to be very small with many inter-region gaps/guard areas, considerably reducing the areal recording density and obviating its benefits, or
• there are relatively few very large write-regions, achieving 90+% of theoretical maximum areal recording density at the expense of update times in the order of 10-100 seconds and significant on-drive memory. This substantially increases cost if SRAM is used and complexity if DRAM is used.
Clearly, shingled writes are not an optimum solution for drives used for random writes and update-in-place; they are perhaps the worst solution for this sort of workload.
A tempting solution is to adopt a "log structured" approach, such as that used for Flash Memory in the SSD FTL (Flash Translation Layer), and map logical sectors to physical locations: write sector updates to a log-file, don't do updates-in-place, and securely maintain a logical-to-physical sector map (a minimal sketch of this mapping follows at the end of this section). Contiguous logical sectors are initially written physically adjacent, but over time, as sectors are updated multiple times, contiguous logical sectors will be spread widely across the disk, radically slowing streaming read rates, leaving many "dead" sectors that reduce the effective capacity, and requiring active consolidation, or "compaction". The drive controllers still have to perform logical-to-physical sector mapping, optimally order reads and reassemble the contiguous logical stream.
Methods to ameliorate the disruption of spatial proximity must trade space for speed:
• either allow low-density (non-shingled) sets of tracks in the inter-region gaps specifically for sector updates, or
• leave an update area at the end of every write-region.
Larger update areas lower effective capacity/areal density, whilst smaller areas are saturated more quickly. Both these update-expansion area approaches have a "capacity wall", or an inherent hard-limit: what to do when the update-area is exhausted?
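Referring back to the log-structured approach above, here is a minimal, purely illustrative sketch of a logical-to-physical sector map with append-only updates. It is not real drive firmware, just a way to see how dead sectors accumulate and where the "capacity wall" appears.

# Minimal sketch of the log-structured idea: logical sectors are never updated
# in place; each write is appended and the old physical copy becomes "dead"
# until compaction. Purely illustrative, not any vendor's implementation.
class LogStructuredMap:
    def __init__(self, total_sectors):
        self.total_sectors = total_sectors
        self.next_free = 0                 # append point on the media
        self.logical_to_physical = {}      # logical sector -> physical sector
        self.dead = 0                      # stale physical sectors awaiting compaction

    def write(self, logical_sector):
        if self.next_free >= self.total_sectors:
            raise RuntimeError("capacity wall: update area exhausted, compaction needed")
        if logical_sector in self.logical_to_physical:
            self.dead += 1                 # the old copy is now stale
        self.logical_to_physical[logical_sector] = self.next_free
        self.next_free += 1

    def read(self, logical_sector):
        return self.logical_to_physical[logical_sector]

# Repeated updates to the same logical sector scatter it across the media and
# accumulate dead sectors - exactly the effect described above.
m = LogStructuredMap(total_sectors=1000)
for _ in range(5):
    m.write(42)
print(m.read(42), m.dead)   # physical location drifts; 4 dead copies so far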
Pre-emptive strategies, such as predictive compaction, must be scheduled for low-activity times to minimise performance impact, requiring the drive to have significant resources, a good time and date source, and to second-guess its load-generators. Embedding additional complex software in millions of disk drives that need to achieve better than "six nines" reliability creates an administration, security and data-loss liability nightmare for consumers and vendors alike. The potential for "unanticipated interactions" is high. The most probable are defeating Operating System block-driver attempts to optimally organise disk I/O requests, and duplicating housekeeping functions like compaction and block relocation, resulting in write avalanches and infinite cascading updates triggered when drives near full capacity. In RAID applications, the variable and unpredictable I/O response time would trigger many false parity recovery operations, offsetting the higher capacity gained with significant performance penalties. In summary, trying to hide the recording structure from the Operating System, with its global view and deep resources, will be counter-effective for update-in-place use. Shingled-write drives are not suitable for high-intensity update-in-place uses such as databases.
There are workloads that are a very good match to large-region-update, whole-disk structures:
• write-once or very low change-rate data, such as video/audio files or Operating System libraries etc,
• log files, when preallocated and written in append-only mode,
• distributed/shared permanent data, such as Google's compressed web pages and index files,
• read-only snapshots,
• backups,
• archives, and
• hybrid systems designed for the structure, using techniques like Overlay mounts with updates written to more volatile-friendly media such as speed-optimised disks or Flash Memory.
Shingled-write drives with non-updateable, large write-regions are a perfect match for an increasingly important HDD application area: "Seek and Stream". There are already multiple classes of disk drives:
• cost-optimised drives,
• robust drives for mobile applications,
• capacity-optimised Enterprise drives,
• speed-optimised Enterprise drives, and
• "green" or power-minimised variants of each class.
Pure shingled-write drives could be considered a new class of drive: capacity-optimised, write whole-region, never update. Not unlike CD-RW or DVD-RAM. With Bit-Patterned Media (BPM), another key technology needed for Z-gen drives, a further refinement is possible for write-whole-region drives: continuous spiral tracks per write-region, as used by Optical drives.
Lastly, an on-drive write-buffer of Flash memory, of 1 or 2 write-regions in size, would, I suspect, improve drive performance significantly and allow additional optimisations or Forward Error Correction in the recording electronics/algorithms. For a Z-gen drive with 4TB/platter and 4Gbps raw bit-rates, 2-8GB write-regions may be close to optimal. Around 1,000 regions per drive would also fit nicely with CLV (Constant Linear Velocity) and power-reducing slow-spin techniques. A refinement would be to allow variable-size regions to precisely match the size of data written, in much the same way that 1/2 inch tape drives wrote variable-sized blocks. This technique allows the Operating System to avoid wasted space or the complex aggregation needed to match file and disk recording-unit sizes.
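A quick sanity-check of the region sizing, using the 4TB/platter, 4Gbps and 4GB figures from the text; the next paragraph's 8-10 second streaming estimate comes out of the same numbers.

# Region sizing sanity-check (figures from the text: 4TB platter, 4Gbps raw bit-rate, 4GB regions).
platter_bytes = 4e12
region_bytes = 4e9
raw_bits_per_sec = 4e9

regions_per_platter = platter_bytes / region_bytes      # ~1,000 regions
stream_seconds = region_bytes * 8 / raw_bits_per_sec    # ~8 s to stream one region
print(f"{regions_per_platter:.0f} regions/platter, ~{stream_seconds:.0f} s to write one 4GB region")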
This is not quite the "Count Key Data" organisation of old mainframe drives (described by Patterson et al in 1988 as "Single Large Expensive Drives"). Like Optical disks, particularly the CD-ROM "Mode 1" layer, additional Forward Error Correction can be cheaply built into the region data to achieve protection from burst-errors and unrecoverable bit-error rates in excess of 1 in 10^60, both on-disk and for off-disk transfers.
For 100-year archival data to be stored on disks, it has to be moved and recreated every 5-7 years, forcing errors to be crystallised each time. Petabyte RAID'd collections using drives with 1 in 10^16 bits-in-error only achieve a 99.5% probability of successful rebuild with RAID-6. Data migrations are effectively RAID rebuilds. Twenty consecutive rebuilds have a 10% probability of complete data loss, an unacceptably high rate. Duplicating the systems reduces this to a 1% probability of complete data loss, but at a 100% overhead. The modern Error Correction techniques suggested here require modest (10-20%) overheads and would improve data protection to less than 0.1% data loss, though not obviating the problems of failing hardware. In a world of automatic data de-duplication and on-disk compression, data protection/preservation becomes a very high priority: storage efficiency brings the cost of a "single point of failure". This adds another impetus to add good Error Correction to write-regions.
A 4GB region, written at 4Gbps, would stream in 8-10 seconds. Buffering an unwritten region in local Flash Memory would allow fast access to the data both before and during the commit-to-disk operation, given sufficient excess Flash read bandwidth. Note: this 4-8Gbps bandwidth is an important constraint on the Flash Memory organisation. Unlike SSD's, because of the direct access and sequential writes, no FTL is required, but bad-block and worn-block management are still necessary. 4GB, approximately the size of a DVD, is known to be a useful and manageable size with a well-understood file system (ISO 9660) available; it would match shingled-disk write-regions well. Working with these region-sizes and file system builds on well-known, tested and understood capabilities, allowing rapid development and a safe transition. Using low-cost MLC Flash Memory with a life of perhaps 10,000 erase cycles would allow the whole platter (1,000 regions) to be rewritten at least 10 times. Allowing a 25% over-provisioning of Flash may improve the lifetime appreciably, as is done with SSD's, which could be a "point of difference" for different drive variants.
Specifically, the cache suggested is write-only, not a read-cache. The drive usage is intended to be "Seek and Stream", which does not benefit from an on-drive read-cache. For servers and disk-appliances, 4GB of DRAM cache is now an insignificant cost and the optimal location for a read cache. Provisioning enough Flash Memory for multiple uncommitted regions, even 2 or 3, may also be a useful "point of difference" for either Enterprise or Consumer applications. Until this drive organisation is simulated and Operating and File Systems are written and trialled/tested against them, real-world requirements and the advantages of larger cache sizes are uncertain. Depending on the head configuration, the data could be read back after write as a whole region and re-recorded as necessary.
The recording electronics could then adjust for detected media defects and optimise recording parameters for the individual surface-head characteristics in the region. If regions are only written at the unused end of drives, such as for archives or digital libraries, maximum effective drive capacity is guaranteed: there is no lost space. Write-Once, Read-Many is an increasingly common and important application for drives. A side-effect is that individual drives will, like 1/2 inch tapes of old, vary in achieved capacity, though of the same notional size, and you can't know by how much until the limit is reached. Operating and File Systems have dealt with "bad blocks" for many decades and can potentially use that approach to cope with variable drive capacity, though it is not a perfect match. Artificially limiting drive capacity to the "Least Common Denominator", either by the consumer or the vendor, is also likely. "Over-clocking" of CPU's shows that some consumers will push the envelope and attempt to subvert/overcome any arbitrary hardware limits imposed. If any popular Operating System can't cope easily with uncertain drive capacity and variable regions, this will limit uptake in that market, though experience suggests not for long if there is an appreciable price or capacity/performance differential.
When re-writing a drive, the most cautious approach is to first logically erase the whole drive and then start recording again, overwriting everything. The most optimistic approach is to logically erase 2 or 3 regions: the region you'd like to write and enough of a physical cushion to allow defects etc. not to cause an unintended region overwrite. This suggests two additional drive commands are needed:
• write region without overwriting the next region (or a named region), and
• query the notional region size available from the "current position" to the next region or end-of-disk.
This raises an implementation detail beyond me: are explicit "erase region" or "free region" operations required? Would they physically write to every raw bit-location in a region or not?
On Heat Assisted Magnetic Recording (HAMR), drive vibration, variable speed and multiple heads/arms
HAMR is another of the key technologies (along with BPM and shingled-writes) being explored/researched to achieve Z-gen capacities. It requires heating of the media, presumably over the Curie Point, to erase the existing magnetic fields. Without specific knowledge, I'm guessing those heads will be bigger and heavier than current heads, and considerably larger and heavier than the read heads needed. Large write-regions with wide guard areas between them would seem to be very well suited to HAMR and its implied low-precision heating element(s). Relieving the heating elements of the same precision requirements as the write and read heads may make the system easier to construct and control, and hence record more reliably. Though this is pure conjecture on my part.
Dr. Sudhanva Gurumurthi and his students have extensively researched and written about the impact of drive rotational velocity, power-use and multiple heads. From the timing of their publications and the release of slow-spin and variable-speed drives, it's reasonable to infer that Gurumurthi's work was taken up by the HDD manufacturers. Being used for "Seek and Stream", not for high-intensity Random I/O, HDD's will exhibit considerably less head/actuator movement, resulting in much less generated vibration if nothing else changes.
This improves operational reliability and greatly lowers induced errors by removing most of the drive-generated vibration. At the very least, less dampening will be needed in high-density storage arrays. Implied in "Seek and Stream" is that I/O characteristics will be different, either:
• nearly 100% write for an archive or logging drive with zero long seeks, or
• nearly 100% read for a digital library or distributed data with moderate long seeking.
In both scenarios, seeks would reduce from the current 250-500/sec to 0.1-10/sec. For continuous spiral tracks, head movement is continuous and smooth for the duration of the streaming read/write, removing entirely the sudden impulses of track seeks. For regions of discrete, concentric tracks, the head movements contain the minimum impulse energy. Good both for power-use and induced vibration.
Drawing on Gurumurthi's work on multiple heads compensating for the performance of slow-spin drives, this head/actuator arrangement for HAMR with shingled-write may be beneficial:
• a separate "heavy head", either heating element, write-head or combined heater-write head, and
• dual light-weight read heads, mounted diagonally from each other and at right-angles to the "heavy head".
Because write operations are infrequent in both scenarios, the "heavy head" will normally be unloaded, even leaving the heating elements (if lasers aren't used) normally off. The 2-10 second region-write time, possibly with 1 or 2 rewrite attempts, means a 10 msec heater ramp-up would not materially affect performance. A single write head can only achieve half the maximum raw transfer rate of dual read heads. Operating Systems have not had to deal with this sort of asymmetry before, and it could flush out bugs due to false assumptions.
Separating the read and "heavy" heads reduces the arm/bearing engineering requirements and actuator power for the usual dominant case - reading. By slamming around lighter loads, lower impulses are produced. Because the drives are not attempting high-intensity random I/O, lower seek performance is acceptable. The energy used in accelerating/decelerating any mass is proportional to velocity². Reducing the arm seek velocity to 70% halves the energy needed and the impulse energy needing to be dissipated. (Lower g-forces also reduce the amplitude of the impulse, though I can't remember the relationship.) With a "Seek and Stream" mode of operation, for a well tuned/balanced system, the dominant time factor is "streaming". The raw I/O transfer rate is of primary concern. The seek rate, especially for read, can be scaled back with little loss in aggregate throughput. Optimising these factors is beyond my knowledge.
By using dual opposing read heads, impulses can be further reduced by synchronising the major seek movements of the read heads/arms. As well, both heads can read the same region simultaneously, doubling the read throughput. This could be as simple as having each head read alternate tracks, or, in a spiral track, the second head starting halfway through the read area, though to achieve maximum bandwidth the requesting initiator may have to be able to cope with two parallel streams, then join the fragments in its buffer. Not ideal, but attainable. Assuming shingled-writes, dual spiral tracks would allow simple interleaving of simultaneous read streams, but would need either two write heads similarly diagonally opposed, or a single device with two heads offset by a track width, and possibly staggered in the direction of travel, to be assembled.
Would a single laser heating element suffice for two write heads? This arrangement sounds overly complicated, difficult to consistently manufacture to high precision, and expensive.
For a single spiral track with dual read-heads, a dual spiral can be simulated, though achieving full throughput requires more local buffer space. The controller moves the heads to adjacent tracks and reads a full track from each into a first set of buffers; it then concatenates the buffers and streams the data. After the first track, the heads are leap-frogged and stream to an alternate set of buffers, which are then concatenated and streamed while the heads are leap-frogged and switch back to the first set of buffers, etc. This scheme doesn't need to buffer an exact track, just something larger than the longest track, at a small loss of speed. If a 1MB "track" size is chosen, then 4MB of buffer space is required. Data can begin being streamed from the first byte of track 0, though only after both buffers are full can full-speed transfers happen.
It's possible to de-interleave the data when written, reordering it before writing so that alternate sectors are offset by half the write buffer size (2MB for a 4MB buffer). On reading, directly after the initial seek to the same 4MB segment but at offsets zero and 2MB, the heads will read alternate sectors, which can then be interleaved easily and output at full bandwidth. When the heads reach the end of a segment (4MB), they jump to the next segment and start streaming again. Some buffering will be required because of the variable track size and geometric head offsets. I'm not sure if either scheme is superior.
Summary:
• Shingled-write drives form a new class of "write whole-region, never update" capacity-optimised drives. As such, they are NOT "drop-in replacements" for current HDD's, but require some tailoring of Operating and File Systems.
• Abandon the notional single-sector organisation for multi-sector variable blocking, similar to old 1/2 inch tape.
• Large write-regions (2-8GB) of variable size with small inter-region gaps maximise achievable drive capacity and minimise file system lost-space due to disk and file system size mismatches. If regions are fixed-sector organised, lost space will average around a half-sector, under 1/1000th overhead.
• Appending regions to disks is the optimal recording method.
• Optimisation techniques used in Optical Drives, such as continuous spiral tracks and CD-ROM's high-resilience Error Correction, can be applied to fixed sectors and whole shingled-write regions.
• Integral high-bandwidth Flash Memory write caches would allow optimal region recording at low cost, including read-back and location-optimised re-recording.
• Shingled-writes would benefit from purpose-designed BPM media, but could be usefully implemented with current technologies to achieve higher capacities, though perhaps exposing individual drive variability.
• Shingled-writes and large, "never updated" regions work well with HAMR, BPM, separated read/write heads and dual light-weight read heads.
Sunday, January 08, 2012
Revolutions End II and The Memory Wall
The 2011 ITRS report for the first time uses the terms "ultimate Silicon scaling" and "Beyond CMOS". The definitive industry report is highlighting for us that the end of the Silicon Revolution is in sight, but that won't be the end of the whole story. Engineers are very clever people and will find ways to keep the electronics revolution moving along, albeit at a much gentler pace.
In 2001, the ITRS report noted that CPU's would be hitting a Power Wall: they'd need to forgo performance (frequency) to fit within a constrained power envelope. Within 2 years, Intel was shipping multi-core CPU's. Herb Sutter wrote about this in "The Free Lunch is Over". In the coming 2011 ITRS report, they write explicitly about "Solving the Memory Wall".
Since 1987 and the Pentium IV, CPU clock rates (and hence cycle times) have been improving faster than DRAM cycle times: by roughly 40% per year (7%/year for DRAM and ~50%/year for CPU chip frequency). This is neatly solved, by trading latency for bandwidth, with caches. The total memory bandwidth needs of multi-core CPU's don't just scale with the chip frequency (5%/year growth), but with the total number of cores accessing the cache (the number of cores grows at approx 40%/year). Cache optimisation, the maximisation of the cache "hit ratio", requires the largest cache possible. Hence Intel now has 3 levels of cache, with the "last level cache" being shared globally (amongst all cores). The upshot of this is simple: to maintain good cache hit-ratios, cache size has to scale with the total demand for memory access, i.e. N-cores * chip freq. To avoid excessive processor 'stall', waiting for the cache to be filled from RAM, the hit-ratio has to increase as the speed differential increases. An increased chip frequency requires a faster average memory access time. So the scaling of cache size is: (N-cores) * (chip freq)², as illustrated with the quoted growth rates at the end of this section. The upshot is: cache memory has grown to dominate CPU chip layout and will only increase.
But it's a little worse than that... The capacity growth of DRAM has slowed to a doubling every 3-4 years. In 2005, the ITRS report for the first time dropped DRAM as its "reference technology node", replacing it with Flash memory and CPU's. DRAM capacity growth is falling behind total CPU chip memory demands. Amdahl posited another law for "Balanced Systems": that each MIP required 1MB of memory. Another complicating factor is bandwidth limitations for "off-chip" transfers - including memory. This is called "the Pin Bottleneck" (because external connections are notionally by 'pins' on the chip packaging). I haven't chased down the growth pattern of off-chip pins. The 2011 ITRS design report discusses it, along with the Memory Wall, as a limiting factor and a challenge to be solved.
As CPU memory demand, the modern version of "MIPS", increases, system memory sizes must similarly scale or the system becomes memory-limited. That isn't a show-stopper in itself, because we invented Virtual Memory (VM) quite some time back to "impedance match" application memory demands with available physical memory. The next performance roadblock is VM system performance, or VM paging rates. VM systems have typically used Hard Disk (HDD) as their "backing store", but whilst HDD capacity has grown faster than any other system component (doubling every year since ~1990), latency, seek and transfer times have improved comparatively slowly, falling behind CPU cycle times and memory demands by 50%/year (??). For systems using HDD as their VM backing store, throughput will be adversely affected, even constrained, by the increasing RAM deficit. There is one bright point in all this: Flash Memory has been doubling in capacity as fast as CPU memory demand, and increasing in both speed (latency) and bandwidth. So much so that there are credible projects to create VM systems tailored to Flash.
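Returning to the cache-scaling relation above, here is a rough illustration using the growth rates quoted in this section; the figures are the text's, and the calculation is only indicative.

# Quick illustration of the cache-scaling argument (cores ~40%/year, chip frequency ~5%/year).
import math

core_growth = 1.40
freq_growth = 1.05

cache_growth = core_growth * freq_growth ** 2          # cache ~ N-cores * freq^2
doubling_years = math.log(2) / math.log(cache_growth)
print(f"required cache grows ~{cache_growth:.2f}x/year, doubling roughly every {doubling_years:.1f} years")

# Compare with DRAM capacity doubling every 3-4 years (also from the text):
dram_growth = 2 ** (1 / 3.5)
print(f"DRAM capacity grows ~{dram_growth:.2f}x/year")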
So our commodity CPU's are evolving to look very similar to David Patterson's iRAM (Intelligent RAM) - a single chip with RAM and processing cores. Just how the chip manufacturers respond is the "$64-billion question".
Perhaps we should be reconsidering Herb Sutter's thesis:
Programmers have to embrace parallel programming and learn to create large, reliable systems with it to exploit future hardware evolution.
— preprints:2008:ttp08-18 [2016/03/17 11:03] (current) Line 1: Line 1: + ====== TTP08-18 High-Precision Charm-Quark Mass from Current-Current Correlators in Lattice and Continuum QCD ====== + + + <hidden TTP08-18 High-Precision Charm-Quark Mass from Current-Current Correlators in Lattice and Continuum QCD > We use lattice QCD simulations, with MILC configurations and HISQ $c$-quark propagators, to make very precise determinations of moments of charm-quark pseudoscalar, vector and axial-vector correlators. These moments are combined with new four-loop results from continuum perturbation theory to obtain several new determinations of the $\msb$ mass of the charm quark. We find $m_c(3 \mathrm{GeV})=0.984 (16)$ GeV, or, equivalently, $m_c(m_c)=1.266 (14)$ GeV. This agrees well with results from continuum analyses of the vector correlator using experimental data for $e^+e^-$ annihilation (instead of using lattice QCD simulations). These lattice and continuum results are the most accurate determinations to date of this mass. We also obtain a new result for the QCD coupling: $\alpha_\msb^{(n_f=4)}(3 \mathrm{GeV}) = 0.230 (18)$, or, equivalently, $\alpha_\msb^{(n_f=5)}(M_Z) = 0.113 (4)$. + + |**K.G. Chetyrkin, J.H. Kuehn, M. Steinhauser, C. Sturm and the HPQCD Collaboration** | + |** Phys.Rev. D78 054513 2008 ** | + | {{preprints:2008:ttp08-18.pdf|PDF}} {{preprints:2008:ttp08-18.ps|PostScript}} [[http://arxiv.org/abs/0805.2999|arXiv]] | + | |
In fact, we will be using data from a past Kaggle competition (Santander Customer Transaction Prediction) for this autoencoder deep learning project, after first implementing a simple linear autoencoder on the MNIST digit dataset using PyTorch. (Assignment notes: we have talked about your project before, and it's still good by me - above and beyond on this homework, very good job. Do tell me your initial project idea and, if you are going to have a partner, who the partner is. My one comment would be that your use of only 2 filters in many of your CNNs is exceptionally small; a minimum of 32 filters is more usual.)
A short recap of standard (classical) autoencoders: an autoencoder is a neural network used for dimensionality reduction, that is, for feature selection and extraction. A standard autoencoder consists of an encoder, which makes a compressed representation (the latent space or bottleneck) of the input, and a decoder, which reconstructs the input from that code. The goal is not just to learn to reconstruct inputs from themselves; other objectives might be feature extraction at the code layer, repurposing the pretrained encoder/decoder for some other task, denoising, and so on. Common uses include dimensionality reduction, denoising of images and text documents, anomaly detection (for example, credit card fraud detection), and generating new data such as faces. This is all basically described in the standard DL textbooks (happy to send references), and there are several types of autoencoders beyond the basic one, including sparse, denoising, adversarial (AAE, a probabilistic autoencoder that uses a GAN to perform variational inference by matching the aggregated posterior of the hidden code vector) and variational (VAE) autoencoders.
Denoising autoencoders (DAE) are an extension of the basic autoencoder and represent a stochastic version of it: noise is added to the input image (Gaussian or speckle noise, or a dropout-style mask built by applying nn.Dropout to torch.ones(img.shape)), the noisy image is fed to the network, and the network is trained to reconstruct the clean image. The motivation is that the hidden layer should capture high-level representations and be robust to small changes in the input. The noise definition used here is a small one borrowed from another PyTorch thread for the MNIST dataset.
The linear autoencoder consists of only linear layers (3 linear layers with ReLU activations; the last activation layer is Sigmoid). The denoising CNN autoencoder performs better: the convolution layers keep the spatial information of the input image and extract features that capture useful structure in the distribution of the input, so the spatial correlation learned by the model produces better reconstructions. More filters mean more features the model can extract. MaxPool2d is used in the encoder, and ConvTranspose2d in the decoder upsamples the feature maps to recover image detail (the usual output-size formula applies: H_out = floor((H_in + 2*padding - dilation*(kernel_size-1) - 1)/stride) + 1). Checking reconstructions at the 100th and 200th epochs, and tracking validation performance as we go, the denoising CNN autoencoder reconstructs digits such as 5 recognisably from noisy inputs and is better than the large denoising autoencoder from the lecture. Training used Google's Colaboratory with a GPU enabled; note that the nn.Module object has a self.training boolean indicating training versus evaluation mode, and the model should be put in evaluation mode when no updates are wanted.
Further directions mentioned in these notes: variational autoencoders as the next step (including a VAE that trains on words and then generates new words); an LSTM autoencoder whose goal is to compress a sequence into a fixed-sized vector that represents the sequence as well as possible; and anomaly detection on an ECG dataset of 5,000 time series examples with 140 timesteps each, where each sequence corresponds to a single heartbeat from a patient with congestive heart failure, with classes such as Premature Ventricular Contraction (r-on-t PVC) and Supraventricular Premature or Ectopic Beat (SP or EB). A minimal sketch of a denoising convolutional autoencoder in PyTorch follows below.
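Below is a minimal sketch of a denoising convolutional autoencoder in PyTorch. The layer sizes, noise level and training schedule are illustrative choices for MNIST, not the original homework's settings.

# Minimal denoising convolutional autoencoder sketch (PyTorch).
# Layer sizes and noise level are illustrative, not taken from the original notes.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: two conv layers with 32+ filters, downsampling 28x28 -> 7x7.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Decoder: transposed convolutions upsample back to 28x28; Sigmoid keeps pixels in [0, 1].
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def add_noise(img, noise_factor=0.3):
    """Additive Gaussian noise; clamp keeps pixel values in [0, 1]."""
    return torch.clamp(img + noise_factor * torch.randn_like(img), 0.0, 1.0)

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    train_data = datasets.MNIST("data", train=True, download=True,
                                transform=transforms.ToTensor())
    loader = DataLoader(train_data, batch_size=128, shuffle=True)

    model = DenoisingAutoencoder().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_func = nn.MSELoss()

    for epoch in range(5):
        model.train()  # self.training == True: gradient updates enabled
        total = 0.0
        for img, _ in loader:
            img = img.to(device)
            noisy = add_noise(img)
            recon = model(noisy)
            loss = loss_func(recon, img)  # compare against the *clean* image
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch}: train loss {total / len(loader):.4f}")

The key design point is that the loss compares the reconstruction of the noisy input against the clean image; that is what makes it a denoising autoencoder rather than a plain one.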
LessWrong.com News
A community blog devoted to refining the art of rationality
Updated: 23 minutes 44 seconds ago
IRL 5/8: Maximum Causal Entropy IRL
April 4, 2019 - 13:53
Published on April 4, 2019 10:53 AM UTC
Every Monday for 8 weeks, we will be posting lessons about Inverse Reinforcement Learning. This is lesson 5. We are publishing it only now because earlier publication would have interfered with the randomized controlled trial we are running, and yesterday we finished collecting responses from the participants. A LW post with the results will appear in a few days. Future IRL lessons will resume normally on Monday.
This lesson comes with the following supplementary material:
Have a nice day!
Could waste heat become an environment problem in the future (centuries)?
April 4, 2019 - 11:57
Published on April 3, 2019 2:48 PM UTC
I have wondered about this scenario for a while, and would like to know your opinion about it. Its assumptions are quite specific and probably won't be true, but they do appear realistic enough for me.
(1): Assume that nuclear fusion becomes an available energy source within a couple of centuries; it will provide a cheap, plentiful, emission-free, and long-lasting source of energy for human activities.
(If this assumption is wrong, we are probably in trouble)
(2): Assume that continued economical/technological development requires increasing energy consumption indefinitely.
(This is probably wrong if we utilise completely new physics in the future, but I don't think this assumption is unlikely)
(3): Assume that the generation of waste heat during energy generation/consumption cannot be dramatically lowered in the short-term future.
(This is also probably wrong. But it will hold true if we still have to use machines/engines/generators based on the same design principles as we do today, and I don't see that happening too soon)
The logical conclusion from the above three assumptions:
At some point after the implementation of nuclear fusion, humanity's energy consumption might reach a level so high that the waste heat we release into the atmosphere will be altering the Earth's climate system not unlike what our carbon emissions are doing today.
(Since the fuel for any future fusion plant is likely hydrogen from seawater, fusion would act as an extra heat source for the Earth, independent of the sun)
The Earth is functionally a giant spacecraft, and spacecraft usually have very sophisticated heat management systems to prevent them from overheating, so perhaps we have to work with that as well.
I haven't done too much number crunching yet, I might have gotten the figures wildly wrong.
We know today the amount of solar energy the Earth receives per year is about ~5000 times the amount of energy humanity consumes.
If humanity's energy consumption increases 100 times, and 50% of the energy is released into the atmosphere as waste heat, then we are releasing ~1% of solar energy into the atmosphere as heat.
That might have some serious climate implications if lasting for a long time, but I'm not certain about that yet.
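A quick check of that arithmetic, using only the ratios given in the post:

# Waste heat relative to solar input, using the post's own figures.
solar_to_human_ratio = 5000      # solar input ~ 5000x current human energy consumption
consumption_multiplier = 100     # assumed future growth in consumption
waste_heat_fraction = 0.5        # fraction of consumed energy released as heat

waste_heat_vs_solar = consumption_multiplier * waste_heat_fraction / solar_to_human_ratio
print(f"waste heat ~ {waste_heat_vs_solar:.1%} of solar input")   # ~ 1.0%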
Possible solutions:
(1): Geoengineering, that seems to be obvious. We try to reduce the solar energy input on Earth when the heat we release is too much. But that probably will negatively impact the biosphere a lot due to photosynthesis issues.
(2): Set "energy consumption targets" for countries/firms/etc like current climate policy.
Problem: while countries can continue to develop their economy and technology without increasing carbon emissions (by adopting clean energy, etc), a limit on energy consumption seems to be a hard cap on a country's development that cannot be worked around. So, probably no one would be compliant with such an agreement...
(3): Colonising other planets/solar systems
Each colony would also have to face that problem.
The Earth (and any other planet/moon we colonise) seems to be functionally the same as a giant space station. And space stations need sophisticated maintenance systems, including management of waste heat.
A significant idea
April 4, 2019 - 01:15
Published on April 3, 2019 10:15 PM UTC
Imagine a person in the ancient world who came up with the following idea: "What would the sun and moon look like if they were very very far away?" This idea would likely lead to the conclusion that they would look like tiny points of light, which then could lead to the question "What if the tiny points of light we call stars and planets are actually faraway suns and moons?"
Unfortunately, our ancient friend would likely be stuck at that point, due to the limitations of human vision and the lack of proper instruments for examining the nature of celestial objects. But our friend would be right, unlike nearly every other human until Giordano Bruno's cosmology of 1584.
My questions then are, what other ideas of similar power exist, how will we know them if we find them, and is there any way to search for them intentionally?
Rationality Dojo
April 4, 2019 - 00:43
Published on April 3, 2019 9:43 PM UTC
On AI and Compute
April 3, 2019 - 22:20
Published on April 3, 2019 7:00 PM UTC
This is a post on OpenAI’s AI and Compute piece, as well as excellent responses by Ryan Carey and Ben Garfinkel, Research Fellows at the Future of Humanity Institute.
Intro: AI and Compute
Last May, OpenAI released an analysis on AI progress that blew me away. The key takeaway is this: the computing power used in the biggest AI research projects has been doubling every 3.5 months since 2012. That means that more recent projects like AlphaZero have tens of thousands of times the “compute” behind them as something like AlexNet did in 2012.
When I first saw this, it seemed like evidence that powerful AI is closer than we think. Moore’s Law doubled generally-available compute about every 18 months to 2 years, and has resulted in the most impressive achievements of the last half century. Personal computers, mobile phones, the Internet...in all likelihood, none of these would exist without the remorseless progress of constantly shrinking, ever cheaper computer chips, powered by the mysterious straight line of Moore’s Law.
So with a doubling cycle for AI compute that’s more than five times faster (let’s call it AI Moore’s Law), we should expect to see huge advances in AI in the relative blink of an eye...or so I thought. But OpenAI’s analysis has led some people to the exact opposite view.[1]
Interpreting the Evidence
Ryan Carey points out that while the compute used in these projects is doubling every 3.5 months, the compute you can buy per dollar is growing around 4-12 times slower. The trend is being driven by firms investing more money, not (for the most part) inventing better technology, at least on the hardware side. This means that the growing cost of projects will keep even Google and Amazon-sized companies from sustaining AI Moore's Law for more than roughly 2.5 years. And that's likely an upper bound, not a lower one; companies may try to keep their research budgets relatively constant. This means that increased funding for AI research would have to displace other R&D, which firms will be reluctant to do.[2] But for lack of good data, for the rest of the post I'll assume we've more or less been following the trend since the publication of "AI and Compute".[3]
While Carey thinks that we’ll pass some interesting milestones for compute during AI Moore’s Law which might be promising for research, Ben Garfinkel is much more pessimistic. His argument is that we’ve seen a certain amount of progress in AI research recently, so realizing that it’s been driven by huge increases in compute means we should reconsider how much adding more will advance the field. He adds that this also means AI advances at the current pace are unsustainable, agreeing with Carey. Both of their views are somewhat simplified here, and worth reading in full.
Thoughts on Garfinkel
To address Garfinkel’s argument, it helps to be a bit more explicit. We can think of the compute in an AI system and the computational power of a human brain as mediated by the effectiveness of their algorithms, which is unknown for both humans and AI systems. The basic equation is something like: Capability = Compute * Algorithms. Once AI Capability reaches a certain threshold, “Human Brain,” we get human-level AI. We can observe the level of Capability that AI systems have reached so far (with some uncertainty), and have now measured their Compute. My initial reaction to reading OpenAI’s piece was the optimistic one - AI Capability must be higher than we thought, since Compute is so much higher! Garfinkel seems to think that Algorithms must be lower than we thought, since Capability hasn’t changed. This shows that Garfinkel and I disagree on how precisely we can observe Capability. If our observation has room to be revised in light of other data, we can avoid lowering Algorithms to some extent. I think he’s probably right that the default approach should be to revise Algorithms downward, though there’s some room to revise Capability upward.
Much of Garfinkel’s pessimism about the implications of “AI and Compute” comes from the realization that its trend will soon stop - an important point. But what if, by that time, the Compute in AI systems will have surpassed the brain’s?
Thoughts on Carey
Carey says one important milestone for AI progress is when projects have compute equal to running a human brain for 18 years. At that point we could expect AI systems to match an 18-year-old human’s cognitive abilities, if their algorithms successfully imitated a brain or otherwise performed at its level. AI Impacts has collected various estimates of how much compute this might require - by the end of AI Moore's Law they should comfortably reach and exceed it. Another useful marker is the 300-year AlphaGo Zero milestone. The thinking here is that AI systems might learn much more slowly than humans - it would take someone about 300 years to play as many Go games as AlphaGo did before beating its previous model, which beat a top-ranked human Go player. A similar ratio might apply to learning to perform other tasks at a human-equivalent level (although AlphaGo Zero’s performance was superhuman). Finally we have the brain-evolution milestone; that is, how much compute it would take to simulate the evolution of a nervous system as complex as the human brain. Only this last milestone is outside the scope of AI Moore's Law.[4] I tend to agree with Carey that the necessary compute to reach human-level AI lies somewhere around the 18 and 300-year milestones.
But I believe his analysis likely overestimates the difficulty of reaching these computational milestones. The FLOPS per brain estimates he cites are concerned with simulating a physical brain, rather than estimating how much useful computation the brain performs. The level of detail of the simulations seems to be the main source of variance among these higher estimates, and is irrelevant for our purposes - we just want to know how well a brain can compute things. So I think we should take the lower estimates as more relevant - Moravec’s 10^13 FLOPS and Kurzweil’s 10^16 FLOPS (page 114) are good places to start,[5] though far from perfect. These estimates are calculated by comparing areas of the brain responsible for discrete tasks like vision to specialized computer systems - they represent something nearer the minimum amount of computation to equal the human brain than other estimates. If accurate, the reduction in required computation by 2 orders of magnitude has significant implications for our AI milestones. Using the estimates Kurzweil cites, we’ll comfortably pass the milestones for both 18 and 300-year human-equivalent compute by the time AI Moore's Law has finished in roughly 2.5 years.[6] There’s also some reason to think that AI systems’ learning abilities are improving, in the sense that they don’t require as much data to make the same inferences. DeepMind certainly seems to be saying that AlphaZero is better at searching a more limited set of promising moves than Stockfish, a traditional chess engine (unfortunately they don’t compare it to earlier versions of AlphaGo on this metric). On the other hand, board games like Chess and Go are probably the ideal case for reinforcement learning algorithms, as they can play against themselves rapidly to improve. It’s unclear how current approaches could transfer to situations where this kind of self-play isn’t possible.
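To make the milestone arithmetic concrete, here is a rough calculation using the figures quoted above and in the footnotes (a 10^16 FLOPS brain estimate, roughly 10^23 total operations for AlphaGo Zero, and a 3.5-month doubling time); all of these are approximations rather than precise claims.

# Rough milestone arithmetic using the figures quoted in the post; all values approximate.
import math

SECONDS_PER_YEAR = 3.15e7
brain_flops = 1e16                                        # Kurzweil-style brain estimate
milestone_18yr = brain_flops * 18 * SECONDS_PER_YEAR      # ~5.7e24 total operations
milestone_300yr = brain_flops * 300 * SECONDS_PER_YEAR    # ~9.5e25

alphago_zero = 1000 * 1e15 * 86400                        # 1,000 petaflop/s-days ~ 8.6e22 ops

doubling_months = 3.5
def months_to_reach(target, start=alphago_zero):
    """Months of 3.5-month doublings needed to grow from AlphaGo Zero's compute to the target."""
    return doubling_months * math.log2(target / start)

print(f"18-year milestone:  {milestone_18yr:.1e} ops, ~{months_to_reach(milestone_18yr):.0f} months past AlphaGo Zero")
print(f"300-year milestone: {milestone_300yr:.1e} ops, ~{months_to_reach(milestone_300yr):.0f} months past AlphaGo Zero")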
Final Thoughts
So - what can we conclude? I don’t agree with Garfinkel that OpenAI’s analysis should make us more pessimistic about human-level AI timelines. While it makes sense to revise our estimate of AI algorithms downward, it doesn’t follow that we should do the same for our estimate of overall progress in AI. By cortical neuron count, systems like AlphaZero are at about the same level as a blackbird (albeit one that lives for 18 years),[7] so there’s a clear case for future advances being more impressive than current ones as we approach the human level. I’ve also given some reasons to think that level isn’t as high as the estimates Carey cites. However, we don’t have good data on how recent projects fit AI Moore’s Law. It could be that we’ve already diverged from the trend, as firms may be conservative about drastically changing their R&D budgets. There’s also a big question mark hovering over our current level of progress in the algorithms that power AI systems. Today’s techniques may prove completely unable to learn generally in more complex environments, though we shouldn’t assume they will.[8]
If AI Moore’s Law does continue, we’ll pass the 18 and 300-year human milestones in the next two years. I expect to see an 18-year-equivalent project in the next five, even if it slows down. After these milestones, we’ll have some level of hardware overhang[9] and be left waiting on algorithmic advances to get human-level AI systems. Governments and large firms will be able to compete to develop such systems, and costs will halve roughly every 4 years,[10] slowly widening the pool of actors. Eventually the relevant breakthroughs will be made. That they will likely be software rather than hardware should worry AI safety experts, as these will be harder to monitor and foresee.[11] And once software lets computers approach a human level in a given domain, we can quickly find ourselves completely outmatched. AlphaZero went from a bundle of blank learning algorithms to stronger than the best human chess players in history...in less than two hours.
1. Important to note that while Moore’s Law resulted in cheaper computers (albeit by increasing the scale and complexity of the factories that make them), this doesn’t seem to be doing the same for AI chips. It’s possible that Google’s TPUs will continue to decrease in cost after becoming commercially available, but without a huge consumer market to sell these to, it’s likely that these firms will mostly have to eat the costs of their investments. ↩︎
2. This assumes corporate bureaucracy will slow reallocation of resources, and could be wrong if firms prove willing to keep ratcheting up total R&D budgets. Both Amazon and Google are doing so at the moment. ↩︎
3. Information about the cost and compute of AI projects since then would be very helpful for evaluating the continuation of the trend. ↩︎
4. Cost and computation figures take AlphaGo Zero as the last available data point in the trend, since it’s the last AI system for which OpenAI has calculated compute. AlphaGo Zero was released in October 2017, but I’m plotting how things will go from now, March 2019, assuming that the trends in cost and compute have continued. These estimates are therefore 1.5 years shorter than Carey’s, apart from our use of different estimates of the brain’s computation. ↩︎
5. Moravec does his estimate by comparing the number of calculations machine vision software makes to the retina, and extrapolating to the size of the rest of the brain. This isn’t ideal, but at least it’s based on a comparison of machine and human capability, not simulation of a physical brain. Kurzweil cites Moravec’s estimate as well as a similar one by Lloyd Watts based on comparisons between the human auditory system and teleconferencing software, and finally one by the University of Texas replicating the functions of a small area of the cerebellum. These latter estimates come to 10^17 and 10^15 FLOPS for the brain. I know people are wary of Kurzweil, but he does seem to be on fairly solid ground here. ↩︎
6. The 18-year milestone would be reached in under a year and the 300-year milestone in slightly over another. If the brain performs about 10^16 operations per second, 18 years' worth would be roughly 10^25 FLOPS. AlphaGo Zero used about 10^23 FLOPS in October 2017 (1,000 Petaflop/s-days; 1 petaflop/s-day is roughly 10^20 ops). If the trend is holding, compute is increasing roughly an order of magnitude per year. It's worth noting that this would be roughly a $700M project in late 2019 (scaling AlphaZero up 100x and halving costs every 4 years), and something like $2-3B if hardware costs weren't spread across multiple projects. Google has an R&D budget over $20B, so this is feasible, though significant. The AlphaGo Zero games milestone would take about 14 months more of AI Moore's Law to reach, or a few decades of cost decreases if it ends. ↩︎
7. This is relative to 10^16 FLOPS estimates of the human brain's computation and assuming computation is largely based on cortical neuron count - a blackbird would be at about 10^14 FLOPS by this measure. ↩︎
8. An illustration of this point is found here, expressed by Richard Sutton, one of the inventors of reinforcement learning. He examines the history of AI breakthroughs and concludes that fairly simple search and learning algorithms have powered the most successful efforts, driven by increasing compute over time. Attempts to use models that take advantage of human expertise have largely failed. ↩︎
9. This argument fails if the piece's cited estimates of a human brain's compute are too optimistic. If more than a couple extra orders of magnitude are needed to get brain-equivalent compute, we could be many decades away from having the necessary hardware. AI Moore's Law can't continue much longer than 2.5 years, so we'd have to wait for long-term trends in cost decreases to run more capable projects. ↩︎
10. AI Impacts cost estimates, using the recent trend of an order-of-magnitude cost decrease every 10-16 years. ↩︎
11. If the final breakthroughs depend on software, we're left with a wide range of possible human-level AI timelines - but one that likely precludes centuries in the future. We could theoretically be months away from such a system if current algorithms with more compute are sufficient. See this article, particularly the graphic on exponential computing growth. This completely violates my intuitions of AI progress but seems like a legitimate position. ↩︎
What are the advantages and disadvantages of knowing your own IQ?
April 3, 2019 - 21:31
Published on April 3, 2019 6:31 PM UTC
I've seen some answers here: https://www.quora.com/What-are-the-advantages-disadvantages-to-know-your-own-IQ But I would be curious to know the perspective of people here. Third alternative: taking an IQ test and tracking your IQ, but not looking at it. For example: can tracking IQ be useful to track cognitive degradation and predict neurodegenerative diseases?
Machine Pastoralism
April 3, 2019 - 19:04
Published on April 3, 2019 4:04 PM UTC
This idea has occurred to me before, but in the interim I dismissed it and then forgot. Since it is back again more-or-less unprompted, I am writing it down. We usually talk about animals and their intelligence as a way to interrogate intelligence in general, or as a model for possible other minds. It occurred to me that our relationship with animals is therefore a model for our relationship with other forms of intelligence.
In the mode of Prediction Machines, it is straightforward to consider: prediction engines in lieu of dogs to track and give warning; teaching/learning systems for exploring the map in lieu of horses; analysis engines to provide our solutions instead of cattle or sheep to provide our sustenance. The idea here is just to map animals-as-capital to the information economy, according to what they do for us. Alongside what they do for us is the question of how we manage them. The Software 2.0 lens of adjusting weights to search program space reads closer to animal husbandry than building a new beast from the ground up with gears each time, to me. It allows for a notion of lineage, and we can envision using groups of machines with subtle variations, or entirely different machines in combination. This analogy also feels like it does a reasonable job of priming the intuition about where dangerous thresholds might lie. How smart is smart enough to be dangerous for one AI? Tiger-ish? We can also think about relative intelligence: the primates with better tool ability and more powerful communication were able to establish patronage and then total domestication over packs of dogs and herds of horses, cattle, and sheep. How big is that gap exactly, and what does that imply about the threshold for doing the same to humans? Historically we are perfectly capable of doing it to ourselves, so it seems like the threshold might actually be lower than us.
Defeating Goodhart and the closest unblocked strategy problem
April 3, 2019 - 17:46
Published on April 3, 2019 2:46 PM UTC
This post is longer and more self-contained than my recent stubs. tl;dr: Patches such as telling the AI "avoid X" will result in Goodhart's law and the nearest unblocked strategy problem: the AI will do almost exactly what it was going to do, except narrowly avoiding the specific X. However, if the patch can be replaced with "I am telling you to avoid X", and this is treated as information about what to avoid, and the biases and narrowness of my reasoning are correctly taken into account, these problems can be avoided. The important thing is to correctly model my uncertainty and overconfidence.
AIs don't have a Goodhart problem, not exactly
The problem of an AI maximising a proxy utility function seems similar to the Goodhart Law problem, but isn't exactly the same thing. The standard Goodhart law is a principal-agent problem: the principal P and the agent A both know, roughly, what the principal's utility U is (eg U aims to create a successful company). However, fulfilling U is difficult to measure, so a measurable proxy V is used instead (eg V aims to maximise share price). Note that the principal's and the agent's goals are misaligned, and the measurable V serves to (try to) bring them more into alignment. For an AI, the problem is not that U is hard to measure, but that it is hard to define. And the AI's goals are V: there is no need to make V measurable, it is not a check on the AI, but the AI's intrinsic motivation. This may seem like a small difference, but it has large consequences. We could give an AI a V, our "best guess" at U, while also including all our uncertainty about how to define U. This option is not available for the principal-agent problem, since giving a complicated goal to a more knowledgeable agent just gives it more opportunities to misbehave: we can't rely on it maximising the goal, we have to check that it does so.
Overfitting to the patches
There is a certain similarity with many machine learning techniques. Neural nets that distinguish cats and dogs could treat any "dog" photo as a specific patch that can be routed around. In that case, the net would define "dog" as "anything almost identical to the dog photos I've been trained on", and "cat" as "anything else".
And that would be a terrible design; fortunately, modern machine learning gets around the problem by, in effect, assigning uncertainty correctly: "dog" is not seen as the exact set of dog photos in the training set, but as a larger, more nebulous concept, of which the specific dog photos are just examples. Similarly, we could define V as W+Δ, where W is our best attempt at specifying U, and Δ encodes the fact that W is but an example our imperfect minds have come up with, to try and capture U. We know that W is oversimplified, and Δ is an encoding of this fact. If a neural net could synthesise a decent estimate of "dog" from some examples, could it synthesise "friendliness" from our attempts to define it? The idea is best explained through an example.
Example: Don't crush the baby or the other objects
This section will present a better example, I believe, than the original one presented here. A robot exists in a grid world: The robot's aim is to get to the goal square, with the flag. It gets a penalty of −1 for each turn it isn't there. If that were the only reward, the robot's actions would be disastrous: So we will give it a penalty of −100 for running over babies. If we do so, we will get a Goodhart/nearest unblocked strategy behaviour: Oops! Turns out we valued those vases as well. What we want the AI to learn is not that the baby is specifically important, but that the baby is an example of important things it should not crush. So imagine it is confronted by the following, which includes six types of objects, of unknown value: Instead of having humans hand-label each item, we instead generalise from some hand-labelled examples, using rules of extrapolation and some machine learning. This tells the AI that, typically, we value about one-in-six objects, and value them at a tenth of the value of babies (hence it gets −10 for running one over). Given that, the best policy, with an expected reward of −9 − 10·(2/6) ≈ −12.33, is: This behaviour is already much better than we would expect from a typical Goodhart law-style agent (and we could complicate the example to make the difference more emphatic).
Example: human over-confidence
The above works if we humans correctly account for our uncertainty - if we not only produce W, but also a correct Δ for how good a match we expect between W and U. But we humans are often overconfident in our estimates, especially in our estimates of value. We are far better at hindsight ("you shouldn't have crushed the vase") than at foresight ("here's a complete list of what you shouldn't do"). Even knowing that hindsight is better doesn't make the issue go away. This is similar to the planning fallacy. That fallacy means that we underestimate the time taken to complete tasks - even if we try to take the planning fallacy into account. However, the planning fallacy can be solved using the outside view: comparing the project to similar projects, rather than using detailed inner knowledge. Similarly, human overconfidence can be solved by the AI noting our initial estimates, our corrections to those initial estimates, our corrections taking into account the previous corrections, our attempts to take into account all previous repeated corrections - and the failure of those attempts. Suppose, for example, that humans, in hindsight, value one-in-three of the typical objects in the grid world.
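As an aside, here is a minimal sketch of the decision rule the robot is using in these examples. The candidate routes (steps taken, unknown objects crossed) are hypothetical, chosen so that the direct route reproduces the −9 − 10·(2/6) ≈ −12.33 figure above; the larger values of p preview how the choice shifts as the robot's estimate of how much we value things rises.

```python
# Minimal sketch of the robot's route choice under uncertainty about which
# objects we value. Route specs (steps, objects crossed) are assumptions
# chosen to match the numbers in the post, not taken from it.
CRUSH_PENALTY = 10  # penalty if a crushed object turns out to be valued

ROUTES = {
    "direct (crosses 2 objects)":  (9, 2),
    "careful (crosses 1 object)": (12, 1),
    "safest (crosses 0 objects)": (17, 0),
}

def expected_reward(steps, objects_crossed, p_valued):
    # -1 per turn, plus the expected penalty for each unknown object crushed
    return -steps - CRUSH_PENALTY * p_valued * objects_crossed

for p in (1/6, 1/3, 0.6):
    best = max(ROUTES, key=lambda name: expected_reward(*ROUTES[name], p))
    print(f"p(object is valued) = {p:.2f}: best route = {best}")
```

With p = 1/6 the direct route wins; at p = 1/3 the robot only risks a single object; once p is high enough it takes the long way around - which is the qualitative behaviour described below.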
We start out with an estimate of one-in-twelve; after the robot mashes a bit too many of the objects, we update to one-in-nine; after being repeatedly told that we underestimate our hindsight, we update to one-in-six... and stay there. But meanwhile, the robot can still see that we continue to underestimate, and goes directly to a one-in-three estimate; so with new, unknown objects, it will only risk crushing a single one. If the robot learnt that we valued even more objects (or valued some of them more than +10), it would then default to the safest, longest route. In practice, of course, the robot will also be getting information about what types of objects we value, but the general lesson still applies: the robot can learn that we underestimate uncertainty, and increase its own uncertainty in consequence.
Full uncertainty, very unknown unknowns
So, this is a more formal version of ideas I posted a while back. The process could be seen as:
1. Give the AI W as our current best estimate for U.
2. Encode our known uncertainties about how well W relates to U.
3. Have the AI deduce, from our subsequent behaviour, how well we have encoded our uncertainties, and change these as needed.
4. Repeat 2-3 for different types of uncertainties.
What do I mean by "different types" of uncertainty? Well, the example above was simple: the model had but a single uncertainty, over the proportion of typical objects that we valued. The AI learnt that we systematically underestimated this, even when it helped us try and do better. But there are other types of uncertainties that could happen. We value some objects more than others, but maybe these estimates are not accurate either. Maybe we are fine as long as one object of a type exists, and don't care about the others - or, conversely, maybe some objects are only valuable in pairs. The AI needs a rich enough model to be able to account for these extra types of preferences, which we may not have ever articulated explicitly. There are even more examples as we move from gridworlds into the real world. We can articulate ideas like "human value is fragile" and maybe give an estimate of the total complexity of human values. And then the agent could use examples to estimate the quality of our estimate, and come up with a better number for the desired complexity. But "human value is fragile" is a relatively recent insight. There was a time when people hadn't articulated that idea. So it's not that we didn't have a good estimate for the complexity of human values; we didn't have any idea that it was a good thing to estimate. The AI has to figure out the unknown unknowns. Note that, unlike the value synthesis project, the AI doesn't need to resolve this uncertainty; it just needs to know that it exists, and give a good-enough estimate of it. The AI will certainly figure out some unknown unknowns (and unknown knowns): it just has to spot some patterns and connections we were unaware of. But in order to get all of them, the AI has to have some sort of maximal model in which all our uncertainty (and all our models) can be contained. Just consider some of the concepts I've come up with (I chose these because I'm most familiar with them; LessWrong abounds with other examples): siren worlds, humans making similar normative assumptions about each other, and the web of connotations. In theory, each of these should have reduced my uncertainty, and moved W closer to U. In practice, each of these has increased my estimate of uncertainty, by showing how much remains to be done.
Could an AI have taken these effects correctly into account, given that these three examples are of very different types? Can it do so for discoveries that remain to be made? I've argued that an indescribable hellworld cannot exist. There's a similar question as to whether there exists human uncertainty about U that cannot be included in the AI's model of Δ. By definition, this uncertainty would be something that is currently unknown and unimaginable to us. However, I feel that it's far more likely to exist than the indescribable hellworld. Still, despite that issue, it seems to me that there are methods of dealing with the Goodhart problem/nearest unblocked strategy problem. And this involves properly accounting for all our uncertainty, directly or indirectly. If we do this well, there no longer remains a Goodhart problem at all.
Alignment Newsletter #51
April 3, 2019 - 07:10
Published on April 3, 2019 4:10 AM UTC
Cancelling within-batch generalization in order to get stable deep RL
Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. You may have noticed that I've been slowly falling behind on the newsletter, and am now a week behind. I would just skip a week and continue -- but there are actually a lot of papers and posts that I want to read and summarize, and just haven't had the time. So instead, this week you're going to get two newsletters. This one focuses on all of the ML-based work that I have mostly been ignoring for the past few issues.
Highlights
Towards Characterizing Divergence in Deep Q-Learning (Joshua Achiam et al): Q-Learning algorithms use the Bellman equation to learn the Q*(s, a) function, which is the long-term value of taking action a in state s. Tabular Q-Learning collects experience and updates the Q-value for each (s, a) pair independently. As long as each (s, a) pair is visited infinitely often, and the learning rate is decayed properly, the algorithm is guaranteed to converge to Q*. Once we get to complex environments where you can't enumerate all of the states, we can't explore all of the (s, a) pairs. The obvious approach is to approximate Q*(s, a). Deep Q-Learning (DQL) algorithms use neural nets for this approximation, and use some flavor of gradient descent to update the parameters of the net such that it is closer to satisfying the Bellman equation. Unfortunately, this approximation can prevent the algorithm from ever converging to Q*. This paper studies the first-order Taylor expansion of the DQL update, and identifies three factors that affect the DQL update: the distribution of (s, a) pairs from which you learn, the Bellman update operator, and the neural tangent kernel, a property of the neural net that specifies how information from one (s, a) pair generalizes to other (s, a) pairs. The theoretical analysis shows that as long as there is limited generalization between (s, a) pairs, and each (s, a) pair is visited infinitely often, the algorithm will converge. Inspired by this, they design PreQN, which explicitly seeks to minimize generalization across (s, a) pairs within the same batch. They find that PreQN leads to competitive and stable performance, despite not using any of the tricks that DQL algorithms typically require, such as target networks.
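For readers who haven't seen it written out, here is a minimal sketch of the tabular update the summary refers to: each (s, a) entry is nudged toward its one-step Bellman target independently, so there is no generalization between state-action pairs at all. The toy two-state chain is made up for illustration.

```python
from collections import defaultdict

def tabular_q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q[s, a] toward its Bellman target."""
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# Toy chain: taking "go" in state 0 pays 1 and leads to state 1, which is
# worth nothing thereafter. Q[(0, "go")] converges to 1, Q[(1, "stay")] to 0.
Q, actions = defaultdict(float), ["go", "stay"]
for _ in range(2000):
    tabular_q_update(Q, 0, "go", 1.0, 1, actions)
    tabular_q_update(Q, 1, "stay", 0.0, 1, actions)
print(round(Q[(0, "go")], 2), round(Q[(1, "stay")], 2))  # -> 1.0 0.0
```

Deep Q-learning replaces the table with a network, so a gradient step on one batch also moves Q-values at other (s, a) pairs - exactly the generalization that PreQN tries to keep under control within a batch.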
Rohin's opinion: I really liked this paper: it's a rare instance where I actually wanted to read the theory in the paper because it felt important for getting the high level insight. The theory is particularly straightforward and easy to understand (which usually seems to be true when it leads to high level insight). The design of the algorithm seems more principled than others, and the experiments suggest that this was actually fruitful. The algorithm is probably more computationally expensive per step compared to other algorithms, but that could likely be improved in the future. One thing that felt strange is that the proposed solution is basically to prevent generalization between (s, a) pairs, but the whole point of DQL algorithms is to generalize between (s, a) pairs since you can't get experience from all of them. Of course, since they are only preventing generalization within a batch, they still generalize between (s, a) pairs that are not in the batch, but presumably that was because they only could prevent generalization within the batch. Empirically the algorithm does seem to work, but it's still not clear to me why it works. Technical AI alignment Learning human intent Deep Reinforcement Learning from Policy-Dependent Human Feedback (Dilip Arumugam et al): One obvious approach to human-in-the-loop reinforcement learning is to have humans provide an external reward signal that the policy optimizes. Previous work noted that humans tend to correct existing behavior, rather than providing an "objective" measurement of how good the behavior is (which is what a reward function is). They proposed Convergent Actor-Critic by Humans (COACH), where instead of using human feedback as a reward signal, they use it as the advantage function. This means that human feedback is modeled as specifying how good an action is relative to the "average" action that the agent would have chosen from that state. (It's an average because the policy is stochastic.) Thus, as the policy gets better, it will no longer get positive feedback on behaviors that it has successfully learned to do, which matches how humans give reinforcement signals. This work takes COACH and extends it to the deep RL setting, evaluating it on Minecraft. While the original COACH had an eligibility trace that helps "smooth out" human feedback over time, deep COACH requires an eligibility replay buffer. For sample efficiency, they first train an autoencoder to learn a good representation of the space (presumably using experience collected with a random policy), and feed these representations into the control policy. They reward entropy so that the policy doesn't commit to a particular behavior, making it responsive to feedback, but select actions by always picking the action with maximal probability (rather than sampling from the distribution) in order to have interpretable, consistent behavior for the human trainers to provide feedback on. They evaluate on simple navigation tasks in the complex 3D environment of Minecraft, including a task where the agent must patrol the perimeter of a room, which cannot be captured by a state-based reward function. Rohin's opinion: I really like the focus on figuring out how humans actually provide feedback in practice; it makes a lot of sense that we provide reinforcement signals that reflect the advantage function rather than the reward function. 
That said, I wish the evaluation had more complex tasks, and had involved human trainers who were not authors of the paper -- it might have taken an hour or two of human time instead of 10-15 minutes, but would have been a lot more compelling. Before continuing, I recommend reading about Simulated Policy Learning in Video Models below. As in that case, I think that you get sample efficiency here by getting a lot of "supervision information" from the pixels used to train the VAE, though in this case it's by learning useful features rather than using the world model to simulate trajectories. (Importantly, in this setting we care about sample efficiency with respect to human feedback as opposed to environment interaction.) I think the techniques used there could help with scaling to more complex tasks. In particular, it would be interesting to see a variant of deep COACH that alternated between training the VAE with the learned control policy, and training the learned control policy with the new VAE features. One issue would be that as you retrain the VAE, you would invalidate your previous control policy, but you could probably get around that (e.g. by also training the control policy to imitate itself while the VAE is being trained). From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following (Justin Fu et al): Rewards and language commands are more generalizable than policies: "pick up the vase" would make sense in any house, but the actions that navigate to and pick up a vase in one house would not work in another house. Based on this observation, this paper proposes that we have a dataset where for several (language command, environment) pairs, we are given expert demonstrations of how to follow the command in that environment. For each data point, we can use IRL to infer a reward function, and use that to train a neural net that can map from the language command to the reward function. Then, at test time, given a language command, we can convert it to a reward function, after which we can use standard deep RL techniques to get a policy that executes the command. The authors evaluate on a 3D house domain with pixel observations, and two types of language commands: navigation and pick-and-place. During training, when IRL needs to be done, since deep IRL algorithms are computationally expensive they convert the task into a small, tabular MDP with known dynamics for which they can solve the IRL problem exactly, deriving a gradient that can then be applied in the observation space to train a neural net that given image observations and a language command predicts the reward. Note that this only needs to be done at training time: at test time, the reward function can be used in a new environment with unknown dynamics and image observations. They show that the learned rewards generalize to novel combinations of objects within a house, as well as to entirely new houses (though to a lesser extent). Rohin's opinion: I think the success at generalization comes primarily because of the MaxEnt IRL during training: it provides a lot of structure and inductive bias that means that the rewards on which the reward predictor is trained are "close" to the intended reward function. For example, in the navigation tasks, the demonstrations for a command like "go to the vase" will involve trajectories through the state of many houses that end up in the vase. 
For each demonstration, MaxEnt IRL "assigns" positive reward to the states in the demonstration, and negative reward to everything else. However, once you average across demonstrations in different houses, the state with the vase gets a huge amount of positive reward (since it is in all trajectories) while all the other states are relatively neutral (since they will only be in a few trajectories, where the agent needed to pass that point in order to get to the vase). So when this is "transferred" to the neural net via gradients, the neural net is basically "told" that high reward only happens in states that contain vases, which is a strong constraint on the learned reward. To be clear, this is not meant as a critique of the paper: indeed, I think when you want out-of-distribution generalization, you have to do it by imposing structure/inductive bias, and this is a new way to do it that I hadn't seen before. Using Natural Language for Reward Shaping in Reinforcement Learning (Prasoon Goyal et al): This paper constructs a dataset for grounding natural language in Atari games, and uses it to improve performance on Atari. They have humans annotate short clips with natural language: for example, "jump over the skull while going to the left" in Montezuma's Revenge. They use this to build a model that predicts whether a given trajectory matches a natural language instruction. Then, while training an agent to play Atari, they have humans give the AI system an instruction in natural language. They use their natural language model to predict the probability that the trajectory matches the instruction, and add that as an extra shaping term in the reward. This leads to faster learning. Interpretability Visualizing memorization in RNNs (Andreas Madsen): This is a short Distill article that showcases a visualization tool that demonstrates how contextual information is used by various RNN units (LSTMs, GRUs, and nested LSTMs). The method is very simple: for each character in the context, they highlight the character in proportion to the gradient of the logits with respect to that character. Looking at this visualization allows us to see that GRUs are better at using long-term context, while LSTMs perform better for short-term contexts. Rohin's opinion: I'd recommend you actually look at and play around with the visualization, it's very nice. The summary is short because the value of the work is in the visualization, not in the technical details. Other progress in AI Exploration Learning Exploration Policies for Navigation (Tao Chen et al) Deep Reinforcement Learning with Feedback-based Exploration (Jan Scholten et al) Reinforcement learning Towards Characterizing Divergence in Deep Q-Learning (Joshua Achiam et al): Summarized in the highlights! Eighteen Months of RL Research at Google Brain in Montreal (Marc Bellemare): One approach to reinforcement learning is to predict the entire distribution of rewards from taking an action, instead of predicting just the expected reward. Empirically, this works better, even though in both cases we choose the action with highest expected reward. This blog post provides an overview of work at Google Brain Montreal that attempts to understand this phenomenon. I'm only summarizing the part that most interested me. First, they found that in theory, distributional RL performs on par with or worse than standard RL when using either a tabular representation or linear features. 
They then tested this empirically on Cartpole, and found similar results: distributional RL performed worse when using tabular or linear representations, but better when using a deep neural net. This suggests that distributional RL "learns better representations". So, they visualize representations for RL on the four-room environment, and find that distributional RL captures more structured representations. Similarly, this paper showed that predicting value functions for multiple discount rates is an effective way to produce auxiliary tasks for Atari.
Rohin's opinion: This is a really interesting mystery with deep RL, and after reading this post I have a story for it. Note I am far from an expert in this field and it's quite plausible that if I read the papers cited in this post I could tell this story is false, but here's the story anyway. As we saw with PreQN earlier in this issue, one of the most important aspects of deep RL is how information about one (s, a) pair is used to generalize to other (s, a) pairs. I'd guess that the benefit from distributional RL is primarily that you get "good representations" that let you do this generalization well. With a tabular representation you don't do any generalization, and with a linear feature space the representation is hand-designed by humans to do this generalization well, so distributional RL doesn't help in those cases. But why does distributional RL learn good representations? I claim that it provides stronger supervision given the same amount of experience. With normal expected RL, the final layer of the neural net need only be useful for predicting the expected reward, but with distributional RL it must be useful for predicting all of the quantiles of the reward distribution. There may be "shortcuts" or "heuristics" that allow you to predict expected reward well because of spurious correlations in your environment, but it's less likely that those heuristics work well for all of the quantiles of the reward distribution. As a result, having to predict more things enforces a stronger constraint on what representations your neural net must have, and thus you are more likely to find good representations. This perspective also explains why predicting value functions for multiple discount rates helps with Atari, and why adding auxiliary tasks is often helpful (as long as the auxiliary task is relevant to the main task). The important aspect here is that all of the quantiles are forcing the same neural net to learn good representations. If you instead have different neural nets predicting each quantile, each neural net has roughly the same amount of supervision as in expected RL, so I'd expect that to work about as well as expected RL, maybe a little worse since quantiles are probably harder to predict than means. If anyone actually runs this experiment, please do let me know the result!
Diagnosing Bottlenecks in Deep Q-learning Algorithms (Justin Fu, Aviral Kumar et al): While the PreQN paper used a theoretical approach to tackle Deep Q-Learning algorithms, this one takes an empirical approach. Their results:
- Small neural nets cannot represent Q*, and so have undesired bias that results in worse performance. However, they also have convergence issues, where the Q-function they actually converge to is significantly worse than the best Q-function that they could express. Larger architectures mitigate both of these problems.
- Validation loss falls as more samples are used, which shows that overfitting is an issue when data is limited. Despite this, larger architectures are better, because the performance loss from overfitting is not as bad as the performance loss from having a bad bias. A good early stopping criterion could help with this.
- To study how non-stationarity affects DQL algorithms, they study a variant where the Q-function is a moving average of the past Q-functions (instead of the full update), which means that the target values don't change as quickly (i.e. it is closer to a stationary target). They find that non-stationarity doesn't matter much for large architectures.
- To study distribution shift, they look at the difference between the expected Bellman error before and after an update to the parameters. They find that distribution shift doesn't correlate much with performance and so is likely not important.
- Algorithms differ strongly in the distribution over (s, a) pairs that the DQL update is computed over. They study this in the absence of sampling (i.e. when they simply weight all possible (s, a) pairs, rather than just the ones sampled from a policy) and find that distributions that are "close to uniform" perform best. They hypothesize that this is the reason that experience replay helps -- initially an on-policy algorithm would take samples from a single policy, while experience replay adds samples from previous versions of the policy, which should increase the coverage of (s, a) pairs.
To sum up, the important factors are using an expressive neural net architecture, and designing a good sampling distribution. Inspired by this, they design Adversarial Feature Matching (AFM), which like Prioritized Experience Replay (PER) puts more weight on samples that have high Bellman error. However, unlike PER, AFM does not try to reduce distribution shift via importance sampling, since their experiments found that this was not important.
Rohin's opinion: This is a great experimental paper; there's a lot of data that can help understand DQL algorithms. I wouldn't take the results too literally, since insights on simple environments may not generalize to more complex environments. For example, they found overfitting to be an issue in their environments -- it's plausible to me that with more complex environments (think Dota/StarCraft, not Mujoco) this reverses and you end up underfitting the data you have. Nonetheless, I think data like this is particularly valuable for coming up with an intuitive theory of how deep RL works, if not a formal one.
Simulated Policy Learning in Video Models (Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Miłos, Błazej Osinski et al): This blog post and the associated paper tackle model-based RL for Atari. The recent world models (AN #23) paper proposed first learning a model of the world by interacting with the environment using a random policy, and then using the model to simulate the environment and training a control policy using those simulations. (This wasn't its main point, but it was one of the things it talked about.) The authors take this idea and put it in an iterative loop: they first train the world model using experience from a random policy, then train a policy using the world model, retrain the world model with experience collected using the newly trained policy, retrain the policy, and so on. This allows us to correct any mistakes in the world model and let it adapt to novel situations that the control policy discovers.
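Schematically, the loop looks something like the sketch below; every component is a placeholder stub standing in for the real pieces (a video-prediction world model, a policy-gradient learner), so this only shows the alternation, not the authors' implementation.

```python
# Schematic skeleton of the iterative model-based loop described above.
# All three components are placeholder stubs, not real implementations.
def collect_experience(policy, n_steps):
    """Roll out the current policy in the real environment (stub)."""
    return [("observation", policy("observation"), 0.0)] * n_steps

def fit_world_model(model, experience):
    """Supervised training of a next-frame/reward predictor (stub)."""
    return {"frames_seen": len(experience)}

def train_policy_in_model(policy, world_model, n_steps):
    """Reinforcement learning inside the learned simulator (stub)."""
    return policy

policy, world_model, experience = (lambda obs: "noop"), {}, []
for iteration in range(5):
    experience += collect_experience(policy, n_steps=100)        # real samples
    world_model = fit_world_model(world_model, experience)       # refit model
    policy = train_policy_in_model(policy, world_model, 10_000)  # cheap simulated RL
```

The sample efficiency comes from the middle and final steps being cheap relative to real environment interaction.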
This allows them to train agents that can play Atari with only 100K interactions with the environment (corresponding to about two hours of real-time gameplay), though the final performance is lower than the state-of-the-art achieved with model-free RL. See Import AI for more details. Rohin's opinion: This work follows the standard pattern where model-based RL is more sample efficient but reaches worse final performance compared to model-free RL. Let's try to explain this using the same story as in the rest of this newsletter. The sample efficiency comes from the fact that they learn a world model that can predict the future, and then use that model to solve the control problem (which has zero sample cost, since you are no longer interacting with the environment). It turns out that predicting the future is "easier" than selecting the optimal action, and so the world model can be trained in fewer samples than it would take to solve the control problem directly. Why is the world model "easier" to learn? One possibility is that solving the control problem requires you to model the world anyway, and so must be a harder problem. If you don't know what your actions are going to do, you can't choose the best one. I don't find this very compelling, since there are lots of aspects of world modeling that are irrelevant to the control problem -- you don't need to know exactly how the background art will change in order to choose what action to take, but world modeling requires you to do this. I think the real reason is that world modeling benefits from much more supervision -- rather than getting a sparse reward signal over a trajectory, you get a full grid of pixels every timestep that you were supposed to predict. This gives you many orders of magnitude more "supervision information" per sample, and so it makes it easier to learn. (This is basically the same argument as in Yann Lecun's cake analogy.) Why does it lead to worse performance overall? The policy is now being trained using rollouts that are subtly wrong, and so instead of specializing to the true Atari dynamics it will be specialized to the world model dynamics, which is going to be somewhat different and should lead to a slight dip in performance. (Imagine a basketball player having to shoot a ball that was a bit heavier than usual -- she'll probably still be good, but not as good as with a regular basketball.) In addition, since the world model is supervised by pixels, any small objects are not very important to the world model (i.e. getting them wrong does not incur much loss), even if they are very important for control. In fact, they find that bullets tend to disappear in Atlantis and Battle Zone, which is not good if you want to learn to play those games. I'm not sure if they shared weights between the world model and the control policy. If they did, then they would also have the problem that the features that are useful for predicting the future are not the same as the features that are useful for selecting actions, which would also cause a drop in performance. My guess is that they didn't share weights for precisely this reason, but I'm not sure. Unifying Physics and Deep Learning with TossingBot (Andy Zeng): TossingBot is a system that learns how to pick up and toss objects into bins using deep RL. The most interesting thing about it is that instead of using neural nets to directly predict actions, they are instead used to predict adjustments to actions that are computed by a physics-based controller. 
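The "predict adjustments to a physics-based controller" idea can be sketched in a few lines; the projectile formula and the learned correction below are toy stand-ins, not TossingBot's actual models.

```python
import math

def physics_throw_speed(distance_m, gravity=9.81, angle_deg=45.0):
    """Ideal projectile speed to land at distance_m, ignoring drag."""
    return math.sqrt(distance_m * gravity / math.sin(2 * math.radians(angle_deg)))

def learned_residual(distance_m):
    """Stand-in for a neural net correcting for drag, grasp offset, etc."""
    return 0.05 * distance_m

def commanded_speed(distance_m):
    # the analytic controller provides the baseline; the network only adjusts it
    return physics_throw_speed(distance_m) + learned_residual(distance_m)

print(round(commanded_speed(1.5), 3))
```

Because the analytic baseline already generalizes to new target locations, the learned part only has to model what the physics misses, which is the source of the generalization the summary points to.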
Since the physics-based controller generalizes well to new situations, TossingBot is also able to generalize to new tossing locations.

Rohin's opinion: This is a cool example of using structured knowledge in order to get generalization while also using deep learning in order to get performance. I also recently came across Residual Reinforcement Learning for Robot Control, which seems to have the same idea of combining deep RL with conventional control mechanisms. I haven't read either of the papers in depth, so I can't compare them, but a very brief skim suggests that their techniques are significantly different.

Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables (Kate Rakelly, Aurick Zhou et al)

Deep learning

Measuring the Limits of Data Parallel Training for Neural Networks (Chris Shallue and George Dahl): Consider the relationship between the size of a single batch and the number of batches needed to reach a specific performance bound when using deep learning. If all that mattered for performance was the total number of examples that you take gradient steps on (i.e. the product of these two numbers), then you would expect a perfect inverse relationship between these two quantities, which would look like a line with negative slope on a log-log plot. In this case, we could scale batch sizes up arbitrarily far, and distribute them across as many machines as necessary, in order to reduce wall clock training time. A 2x increase in batch size with twice as many machines would lead to a 2x decrease in training time. However, as you make batch sizes really large, you face the problem of stale gradients: if you had updated on the first half of the batch and then computed gradients on the second half of the batch, the gradients for the second half would be "better", because they were computed with respect to a better set of parameters. When this effect becomes significant, you no longer get the nice linear scaling from parallelization.

This post studies the relationship empirically across a number of datasets, architectures, and optimization algorithms. They find that universally, there is initially an era of perfect linear scaling as you increase batch size, followed by a region of diminishing marginal returns that ultimately leads to an asymptote where increasing batch size doesn't help at all with reducing wall-clock training time. However, the transition points between these regimes vary wildly, suggesting that there may be low hanging fruit in the design of algorithms or architectures that explicitly aim to achieve very good scaling.

Rohin's opinion: OpenAI found (AN #37) that the best predictor of the maximum useful batch size was how noisy the gradient is. Presumably when you have noisy gradients, a larger batch size helps "average out" the noise across examples. Rereading their post, I notice that they mentioned the study I've summarized here and said that their results can help explain why there's so much variance in the transition points across datasets. However, I don't think it can explain the variance in transition points across architectures. Noisy gradients are typically a significant problem, and so it would be weird if the variance in transition points across architectures were explained by the noisiness of the gradient: that would imply that two architectures reach the same final performance even though one had the problem of noisy gradients while the other didn't. So there seems to be something left to explain here.
That said, I haven't looked in depth at the data, so the explanation could be very simple. For example, maybe the transition points don't vary much across architecture and vary much more across datasets, and the variance across architecture is small enough that its effect on performance is dwarfed by all the other things that can affect the performance of deep learning systems. Or perhaps while the noisiness of the gradient is a good predictor of the maximum batch size, it still only explains say 40% of the effect, and so variance across architectures is totally compatible with factors other than the gradient noise affecting the maximum batch size.

Copyright © 2019 Rohin Shah, All rights reserved.

LW Update 2019-04-02 – Frontpage Rework
April 3, 2019 - 02:48
Published on April 2, 2019 11:48 PM UTC

Since LW2.0 launched, the frontpage had become very complex – both visually and conceptually. This was producing an overall bad experience, and making it hard for the team to add or scale up features (such as Q&A, and later on Community, Library and upcoming Recommendations). For the past couple months, we've been working on an overhaul of the frontpage (and correspondingly, the overall site design). Our goal was to rearrange that complexity, spending fewer "complexity points" on things that didn't need them as much, so we could spend them elsewhere.

Frontpage Updates
• Tooltip oriented design.
• It's easier to figure out what most things will do before you click on it.
• Navigation Menu
• Helps establish the overall site hierarchy
• Available on all major site pages (not Post Pages, where we want people to read without distraction)
• Improved mobile navigation (shows up as a tab menu at the bottom)
• Eventually we'll deprecate the old Nav Menu (still available in the header) and replace it with a collapsible version of the new one.
• Home Page streamlining
• Moved Recommend Sequences and Community over to the Nav Menu, so there are only 3 sections to parse
• Post Items simplified down to one line.
• Latest Posts now only have a single setting: "show personal blogposts", instead of forcing you to figure out immediately what "meta", "curated" and "daily" are.
• Post List options are generally 'light cobalt blue' – not too obtrusive, but easier to find when you want them.
• Questions Page now has two sections:
• Recent Activity – simply sorted by "most recently commented at", so if you respond to an old question it will appear above the fold.
• Top Questions – also sorted by "recently commented", but filtered to questions with 40 or more karma, so that it's easier to catch up on updates to highly upvoted questions.
• Community Page
• UI updated to match Home Page.
• The group section now shows 7 groups instead of 3, and has a load more button.

Degrees of Freedom
April 3, 2019 - 00:10
Published on April 2, 2019 9:10 PM UTC

Something I've been thinking about for a while is the dual relationship between optimization and indifference, and the relationship between both of them and the idea of freedom.

Optimization: "Of all the possible actions available to me, which one is best? (by some criterion). Ok, I'll choose the best."

Indifference: "Multiple possible options are equally good, or incommensurate (by the criterion I'm using). My decision algorithm equally allows me to take any of them."

Total indifference between all options makes optimization impossible or vacuous.
An optimization criterion which assigns a total ordering between all possibilities makes indifference vanishingly rare. So these notions are dual in a sense. Every dimension along which you optimize is in the domain of optimization; every dimension you leave “free” is in the domain of indifference. Being “free” in one sense can mean “free to optimize”. I choose the outcome that is best according to an internal criterion, which is not blocked by external barriers. A limit on freedom is a constraint that keeps me away from my favorite choice. Either a natural limit (“I would like to do that but the technology doesn’t exist yet”) or a man-made limit (“I would like to do that but it’s illegal.”) There’s an ambiguity here, of course, when it comes to whether you count “I would like to do that, but it would have a consequence I don’t like” as a limit on freedom. Is that a barrier blocking you from the optimal choice, or is it simply another way of saying that it’s not an optimal choice after all? And, in the latter case, isn’t that basically equivalent to saying there is no such thing as a barrier to free choice? After all, “I would like to do that, but it’s illegal” is effectively the same thing as “I would like to do that, but it has a consequence I don’t like, such as going to jail.” You can get around this ambiguity in a political context by distinguishing natural from social barriers, but that’s not a particularly principled distinction. Another issue with freedom-as-optimization is that it’s compatible with quite tightly constrained behavior, in a way that’s not consistent with our primitive intuitions about freedom. If you’re only “free” to do the optimal thing, that can mean you are free to do only one thing, all the time, as rigidly as a machine. If, for instance, you are only free to “act in your own best interests”, you don’t have the option to act against your best interests. People in real life can feel constrained by following a rigid algorithm even when they agree it’s “best”; “but what if I want to do something that’s not best?” Or, they can acknowledge they’re free to do what they choose, but are dismayed to learn that their choices are “dictated” as rigidly by habit and conditioning as they might have been by some human dictator. An alternative notion of freedom might be freedom-as-arbitrariness. Freedom in the sense of “degrees of freedom” or “free group”, derived from the intuition that freedom means breadth of possibility rather than optimization power. You are only free if you could equally do any of a number of things, which ultimately means something like indifference. This is the intuition behind claims like Viktor Frankl’s: “Between stimulus and response there is a space. In that space is our power to choose a response. In our response lies our growth and our freedom.” If you always respond automatically to a given stimulus, you have only one choice, and that makes you unfree in the sense of “degrees of freedom.” Venkat Rao’s concept of freedom is pretty much this freedom-as-arbitrariness, with some more specific wrinkles. He mentions degrees of freedom (“dimensionality”) as well as “inscrutability”, the inability to predict one’s motion from the outside. Buddhists also often speak of freedom more literally in terms of indifference, and there’s a very straightforward logic to this; you can only choose equally between A and B if you have been “liberated” from the attractions and aversions that constrain you to choose A over B. 
Those who insist that Buddhism is compatible with a fairly normal life say that after Buddhist practice you still will choose systematically most of the time — your utility function cannot fully flatten if you act like a living organism — but that, like Viktor Frankl’s ideal human, you will be able to reflect with equanimity and consider choosing B over A; you will be more “mentally flexible.” Of course, some Buddhist texts simply say that you become actually indifferent, and that sufficient vipassana meditation will make you indistinguishable from a corpse.

Freedom-as-indifference, I think, is lurking behind our intuitions about things like “rights” or “ownership.” When we say you have a “right” to free speech — even a right bounded with certain limits, as it of course always is in practice — we mean that within those limits, you may speak however you want. Your rights define a space, within which you may behave arbitrarily. Not optimally. A right, if it’s not to be vacuous, must mean the right to behave “badly” in some way or other. To own a piece of property means that, within whatever limits the concept of ownership sets, you may make use of it in any way you like, even in suboptimal ways.

This is very clearly illustrated by Glen Weyl’s notion of radical markets, which neatly disassociates two concepts usually both considered representative of free-market systems: ownership and economic efficiency. To own something just is to be able to hang onto it even when it is economically inefficient to do so. As Weyl says, “property is monopoly.” The owner of a piece of land can sit on it, making no improvements, while holding out for a high price; the owner of intellectual property can sit on it without using it; in exactly the same way that a monopolist can sit on a factory and depress output while charging higher prices than he could get away with in a competitive market.

For better or for worse, rights and ownership define spaces in which you can destroy value. If your car was subject to a perpetual auction and ownership tax as Weyl proposes, bashing your car to bits with a hammer would cost you even if you didn’t personally need a car, because it would hurt the rental or resale value and you’d still be paying tax. On some psychological level, I think this means you couldn’t feel fully secure in your possessions, only probabilistically likely to be able to provide for your needs. You only truly own what you have a right to wreck.

Freedom-as-a-space-of-arbitrary-action is also, I think, an intuition behind the fact that society (all societies, but the US more than other rich countries, I think) is shaped by people’s desire for more discretion in decisionmaking as opposed to transparent rubrics. College admissions, job applications, organizational codes of conduct, laws and tax codes, all are designed deliberately to allow ample discretion on the part of decisionmakers rather than restricting them to following “optimal” or “rational”, simple and legible, rules. Some discretion is necessary to ensure good outcomes; a wise human decisionmaker can always make the right decision in some hard cases where a mechanical checklist fails, simply because the human has more cognitive processing power than the checklist. This phenomenon is as old as Plato’s Laws and as current as the debate over algorithms and automation in medicine.
However, what we observe in the world is more discretion than would be necessary, for the aforementioned reasons of cognitive complexity, to generate socially beneficial outcomes. We have discretion that enables corruption and special privileges in cases that pretty much nobody would claim to be ideal — rich parents buying their not-so-competent children Ivy League admissions, favored corporations voting themselves government subsidies. Decisionmakers want the “freedom” to make illegible choices, choices which would look “suboptimal” by naively sensible metrics like “performance” or “efficiency”, choices they would prefer not to reveal or explain to the public. Decisionmakers feel trapped when there’s too much “accountability” or “transparency”, and prefer a wider sphere of discretion. Or, to put it more unfavorably, they want to be free to destroy value. And this is true at an individual psychological level too, of course — we want to be free to “waste time” and resist pressure to account for literally everything we do. Proponents of optimization insist that this is simply a failure mode from picking the wrong optimization target — rest, socializing, and entertainment are also needs, the optimal amount of time to devote to them isn’t zero, and you don’t have to consider personal time to be “stolen” or “wasted” or “bad”, you can, in principle, legibilize your entire life including your pleasures. Anything you wish you could do “in the dark”, off the record, you could also do “in the light,” explicitly and fully accounted for. If your boss uses “optimization” to mean overworking you, the problem is with your boss, not with optimization per se. The freedom-as-arbitrariness impulse in us is skeptical. I see optimization and arbitrariness everywhere now; I see intelligent people who more or less take one or another as ideologies, and see them as obviously correct. Venkat Rao and Eric Weinstein are partisans of arbitrariness; they speak out in favor of “mediocrity” and against “excellence” respectively. The rationale being, that being highly optimized at some widely appreciated metric — being very intelligent, or very efficient, or something like that — is often less valuable than being creative, generating something in a part of the world that is “dark” to the rest of us, that is not even on our map as something to value and thus appears as lack of value. Ordinary people being “mediocre”, or talented people being “undisciplined” or “disreputable”, may be more creative than highly-optimized “top performers”. Robin Hanson, by contrast, is a partisan of optimization; he speaks out against bias and unprincipled favoritism and in favor of systems like prediction markets which would force the “best ideas to win” in a fair competition. Proponents of ideas like radical markets, universal basic income, open borders, income-sharing agreements, or smart contracts (I’d here include, for instance, Vitalik Buterin) are also optimization partisans. These are legibilizing policies that, if optimally implemented, can always be Pareto improvements over the status quo; “whatever degree of wealth redistribution you prefer”, proponents claim, “surely it is better to achieve it in whatever way results in the least deadweight loss.” This is the very reason that they are not the policies that public choice theory would predict would emerge naturally in governments. Legibilizing policies allow little scope for discretion, so they don’t let policymakers give illegible rewards to allies and punishments to enemies. 
They reduce the scope of the “political”, i.e. that which is negotiated at the personal or group level, and replace it with an impersonal set of rules within which individuals are “free to choose” but not very “free to behave arbitrarily” since their actions are transparent and they must bear the costs of being in full view. Optimization partisans are against weakly enforced rules — they say “if a rule is good, enforce it consistently; if a rule is bad, remove it; but selective enforcement is just another word for favoritism and corruption.” Illegibility partisans say that weakly enforced rules are the only way to incorporate valuable information — precisely that information which enforcers do not feel they can make explicit, either because it’s controversial or because it’s too complex to verbalize. “If you make everything explicit, you’ll dumb everything in the world down to what the stupidest and most truculent members of the public will accept. Say goodbye to any creative or challenging innovations!” I see the value of arguments on both sides. However, I have positive (as opposed to normative) opinions that I don’t think everybody shares. I think that the world I see around me is moving in the direction of greater arbitrariness and has been since WWII or so (when much of US society, including scientific and technological research, was organized along military lines). I see arbitrariness as a thing that arises in “mature” or “late” organizations. Bigger, older companies are more “political” and more monopolistic. Bigger, older states and empires are more “corrupt” or “decadent.” Arbitrariness has a tendency to protect those in power rather than out of power, though the correlation isn’t perfect. Zones that protect your ability to do “whatever” you want without incurring costs (which include zones of privacy or property) are protective, conservative forces — they allow people security. This often means protection for those who already have a lot; arbitrariness is often “elitist”; but it can also protect “underdogs” on the grounds of tradition, or protect them by shrouding them in secrecy. (Scott thought “illegibility” was a valuable defense of marginalized peoples like the Roma. Illegibility is not always the province of the powerful and privileged.) No; the people such zones of arbitrary, illegible freedom systematically harm are those who benefit from increased accountability and revealing of information. Whistleblowers and accusers; those who expect their merit/performance is good enough that displaying it will work to their advantage; those who call for change and want to display information to justify it; those who are newcomers or young and want a chance to demonstrate their value. If your intuition is “you don’t know me, but you’ll like me if you give me a chance” or “you don’t know him, but you’ll be horrified when you find out what he did”, or “if you gave me a chance to explain, you’d agree”, or “if you just let me compete, I bet I could win”, then you want more optimization. If your intuition is “I can’t explain, you wouldn’t understand” or “if you knew what I was really like, you’d see what an impostor I am”, or “malicious people will just use this information to take advantage of me and interpret everything in the worst possible light” or “I’m not for public consumption, I am my own sovereign person, I don’t owe everyone an explanation or justification for actions I have a right to do”, then you’ll want less optimization. 
Of course, these aren’t so much static “personality traits” of a person as one’s assessment of the situation around oneself. The latter cluster is an assumption that you’re living in a social environment where there’s very little concordance of interests — people knowing more about you will allow them to more effectively harm you. The former cluster is an assumption that you’re living in an environment where there’s a great deal of concordance of interests — people knowing more about you will allow them to more effectively help you.

For instance, being “predictable” is, in Venkat’s writing, usually a bad thing, because it means you can be exploited by adversaries. Free people are “inscrutable.” In other contexts, such as parenting, being predictable is a good thing, because you want your kids to have an easier time learning how to “work” the house rules. You and your kid are not, most of the time, wily adversaries outwitting each other; conflicts are more likely to come from too much confusion or inconsistently enforced boundaries. Relationship advice and management advice usually recommends making yourself easier for your partners and employees to understand, never more inscrutable. (Sales advice, however, and occasionally advice for keeping romance alive in a marriage, sometimes recommends cultivating an aura of mystery, perhaps because it’s more adversarial.) A related notion: wanting to join discussions is a sign of expecting a more cooperative world, while trying to keep people from joining your (private or illegible) communications is a sign of expecting a more adversarial world.

As social organizations “mature” and become larger, it becomes harder to enforce universal and impartial rules, harder to keep the larger population aligned on similar goals, and harder to comprehend the more complex phenomena in this larger group. This means that there’s both motivation and opportunity to carve out “hidden” and “special” zones where arbitrary behavior can persist even when it would otherwise come with negative consequences. New or small organizations, by contrast, must gain/create resources or die, so they have more motivation to “optimize” for resource production; and they’re simple, small, and/or homogeneous enough that legible optimization rules and goals and transparent communication are practical and widely embraced. “Security” is not available to begin with, so people mostly seek opportunity instead.

This theory explains, for instance, why US public policy is more fragmented, discretionary, and special-case-y, and less efficient and technocratic, than it is in other developed countries: the US is more racially diverse, which means, in a world where racism exists, that US civil institutions have evolved to allow ample opportunities to “play favorites” (giving special legal privileges to those with clout) in full generality, because a large population has historically been highly motivated to “play favorites” on the basis of race. Homogeneity makes a polity behave more like a “smaller” one, while diversity makes a polity behave more like a “larger” one.

Aesthetically, I think of optimization as corresponding to an “early” style, like Doric columns, or like Masaccio; simple, martial, all form and principle. Arbitrariness corresponds to a “late” style, like Corinthian columns or like Rubens: elaborate, sensual, full of details and personality. The basic argument for optimization over arbitrariness is that it creates growth and value while arbitrariness creates stagnation.
Arbitrariness can’t really argue for itself as well, because communication itself is on the other side. Arbitrariness always looks illogical and inconsistent. It kind of is illogical and inconsistent. All it can say is “I’m going to defend my right to be wrong, because I don’t trust the world to understand me when I have a counterintuitive or hard-to-express or controversial reason for my choice. I don’t think I can get what I want by asking for it or explaining my reasons or playing ‘fair’.” And from the outside, you can’t always tell the difference between someone who thinks (perhaps correctly!) that the game is really rigged against them at a profound level, and somebody who just wants to cheat or who isn’t thinking coherently. Sufficiently advanced cynicism is indistinguishable from malice and stupidity.

For a fairly sympathetic example, you see something like Darkness at Noon, where the protagonist thinks, “Logic inexorably points to Stalinism; but Stalinism is awful! Therefore, let me insist on some space free from the depredations of logic, some space where justice can be tempered by mercy and reason by emotion.” From the distance of many years, it’s easy to say that’s silly, that of course there are reasons not to support Stalin’s purges, that it’s totally unnecessary to reject logic and justice in order to object to killing innocents. But from inside the system, if all the arguments you know how to formulate are Stalinist, if all the “shoulds” and “oughts” around you are Stalinist, perhaps all you can articulate at first is “I know all this is right, of course, but I don’t like it.” Not everything people call reason, logic, justice, or optimization, is in fact reasonable, logical, just, or optimal; so, a person needs some defenses against those claims of superiority. In particular, defenses that can shelter them even when they don’t know what’s wrong with the claims. And that’s the closest thing we get to an argument in favor of arbitrariness. It’s actually not a bad point, in many contexts. The counterargument usually has to boil down to hope — to a sense of “I bet we can do better.”

Triangle SSC Meetup-April
April 2, 2019 - 21:42
Published on April 2, 2019 6:42 PM UTC

Interested in rationality in the Research Triangle? Come join us at Ponysaurus. We're a fun, welcoming and engaging group!

March 2019 gwern.net newsletter
April 2, 2019 - 17:17
Published on April 2, 2019 2:17 PM UTC

Internet v. Culture (2019) - Los Angeles LW/SSC Meetup #103 (Wednesday, April 3rd)
April 2, 2019 - 09:00
Published on April 2, 2019 6:00 AM UTC

Location: Wine Bar next to the Landmark Theater in the Westside Pavilion (10850 W Pico Blvd #312, Los Angeles, CA 90064). We will move upstairs (to the 3rd floor hallway) as soon as we reach capacity.

Time: 7 pm (April 3rd)

Parking: Available in the parking lot for the entire complex. The first three (3) hours are free and do not require validation (the website is unclear and poorly written, but it may be the case that if you validate your ticket and leave before three hours have passed, you will be charged). After that, parking is $3 for up to the fifth (5) hour, with validation.

Contact: The best way to contact me (or anybody else who is attending the meetup) is through our Discord. Feel free to message me (T3t) directly. Invitation link: https://discord.gg/TaYjsvN

Topic: We'll be discussing the effects (and second-order effects) of the internet on culture.
Reading: https://marginalrevolution.com/marginalrevolution/2019/04/the-internet-vs-culture.html

User GPT2 is Banned
April 2, 2019 - 09:00
Published on April 2, 2019 6:00 AM UTC

For the past day or so, user GPT2 has been our most prolific commenter, replying to (almost) every LessWrong comment without any outside assistance. Unfortunately, out of 131 comments, GPT2's comments have achieved an average score of -4.4, and have not improved since it received a moderator warning. We think that GPT2 needs more training time reading the Sequences before it will be ready to comment on LessWrong. User GPT2 is banned for 355 days, and may not post again until April 1, 2020. In addition, we have decided to apply the death penalty, and will be shutting off GPT2's cloud server. Use this thread for discussion about GPT2, on LessWrong and in general.

post-rational distractions
April 2, 2019 - 05:26
Published on April 2, 2019 2:26 AM UTC

DonyChristie's intellectual fap post has called for post-rational techniques. I got most of the way through a comment reply before I realised it was a joke. April fools and all. Fruits of that effort here are some thoughts:

***

Developing your centres. Sarah Perry's are knitting and mountain running. https://www.ribbonfarm.com/2018/04/06/deep-laziness/ "If you ever meet me in person and want to put me at ease, ask me about running or knitting. These are two of my behaviours, my behavioural centers, and one indication of that is how much I like talking about them specifically. I do feel that there is something special about them, and that they connect to my nature on a fundamental level. In my heart, I think everyone should do mountain running and knitting, because they are the best things."

Reading a lot. All the good soft books. Perhaps the ones overlooked by the skeptic types: Bonds that make us free, Feeding your Demons, Chakras, MTG colour wheel, Dream interpretation, Peterson's Bible lectures. Architecture.

Free-ing stuck meanings. A long example of Chapman's here. "I'm not good with people" or "I'm not a technical person"

Meditation. Seems to be important and relate to this somehow. MTCB, The Mind Illuminated, Seeing that frees, Roaring Silence.

What's the context? What the hell is it you're trying to do? The metagame is discovering the constraints. You're swimming in the unknown; what are the rules of the game you're playing? This is what you're doing anyway.

It feels important to keep in mind Chapman's answer to "If not Bayes then what?" My answer to “If not Bayesianism, then what?” is: all of human intellectual effort. Figuring out how things work, what's true or false, what's effective or useless, is "human complete." In other words, it's unboundedly difficult, and every human intellectual faculty must be brought to bear.

Announcing the Center for Applied Postrationality
April 2, 2019 - 04:17
Published on April 2, 2019 1:17 AM UTC

Hi all! Today, we are announcing the formation of a new organization: the Center for Applied Postrationality (or CFAP). Right now we're looking for two things: 1) $1.5 million in funding to have runway for the next six months, and 2) intellectual contributions from deep thinkers like YOU!
Just what is postrationality, anyway? To be honest, we don't really know either. Maybe you can help us? The term can refer to many different things, including:
• Epistemological pragmatism
• Getting in touch with your feelings
• Learning social skills
• Disagreeing with stuff on LessWrong
• Becoming a Christian
• According to one of our employees: "Postrationality is making the territory fit the map. Postrationality is realizing that paraconsistent linear homotopy type theory is the One True framework for epistemology, not Bayes. Postrationality is realizing that Aristotle was right about everything so there's no point in doing philosophy anymore. Or natural science. Postrationality is The Way that realizes there is no Way. Postrationality is meaning wireheading rather than pleasure wireheading. Postrationality is postironic belief in God. Postrationality is the realization that there is no distinction between sincerity and postirony."
• Another employee: "CFAP exists at the intersection of epistemology, phenomenology, sense-making, and pretentiousness. Our goal is to make new maps, in order to better navigate the territory, especially the territory in California. Google maps sucks at this. "
We're still deconfusing ourselves on what "applied" postrationality is, as so far it's mostly been insight porn posted on Twitter. Comment below what techniques you'd suggest for training the art of postrationality!
User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines
April 1, 2019 - 23:23
Published on April 1, 2019 8:23 PM UTC
We take commenting quality seriously on LessWrong, especially on Frontpage posts. In particular, we think that this comment by user GPT2 fails to live up to our Frontpage commenting guidelines:
This is a pretty terrible post; it belongs in Discussion (which is better than Main and just as worthy of asking the question), and no one else is going out and read it. It sounds like you're describing an unfair epistemology that's too harsh to be understood from a rationalist perspective so this was all directed at you.
Since user GPT2 seems to be quite prolific, we have implemented a setting to hide comments by GPT2, which can be accessed from the settings page when you are logged in.
Prompts for eliciting blind spots/bucket errors/bugs
April 1, 2019 - 22:41
Published on April 1, 2019 7:41 PM UTC
This post is to make publicly available a few prompts/questions I came up with aiming to uncover blind spots around identity/self-concepts.
• Select a trait X that you believe you have, and where you like that you have it (e.g. rational, kind, patient...)
• Try to imagine a character that is a caricature of someone with trait X. Or another way to think about this: The way Spock is a Straw Man version of a rational character, what would a Straw Man version of a character with trait X look like? (referred to in the following as X-Spock)
• What are blind spots an X-Spock is likely to have?
• In what sorts of situations is an X-Spock especially likely to fail?
• What would an X-Spock have a lot of trouble admitting to? (e.g. someone who considers themselves courageous may be unable to admit they are afraid)
• What are traits that seem like opposites of X?
• Could the opposite traits actually be beneficial?
• Is what seems like an opposite trait in actuality orthogonal? (e.g. rational and emotional)
Learning "known" information when the information is not actually known
April 1, 2019 - 20:56
Published on April 1, 2019 5:56 PM UTC
Methods like cooperative inverse reinforcement learning assume that the human knows their "true" reward function R(θ), and then that the human and the robot cooperate to figure out and maximise this reward.
This is fine as far as the model goes, and can allow us to design many useful systems. But it has a problem: the assumption is not true, and, moreover, its falsity can have major detrimental effects.
Contrast two situations:
1. The human knows the true R(θ).
2. The human has a collection of partial models in which they have clearly defined preferences. As a bounded, limited agent whose internal symbols are only well-grounded in standard situations, their stated preferences will be a simplification of their mental model at the time. The true R(θ) is constructed from some process of synthesis.
Now imagine the following conversation:
• AI: What do you really want?
• Human: Money.
• AI: Are you sure?
• Human: Yes.
Under most versions of hypothesis 1., this will end in disaster. The human has expressed their preferences, and, when offered the opportunity for clarification, didn't give any. The AI will become a money-maximiser, and things go pear shaped.
Under hypothesis 2., however, the AI will attempt to get more details out of the human, suggesting hypothetical scenarios, checking what happens when money and other things in money's web of connotations come apart - eg "What if you had a lot of money, but couldn't buy anything, and everyone despised you?" The synthesis may fail, but, at the very least, the AI will investigate more.
Thus, assuming that the AI will be learning a truth that humans already know is a harmless assumption in many circumstances, but will result in disasters if pushed to the extreme.
|
| Label | Wt | Deg | $\mathrm{dim}_{\mathbb{R}}$ | $\mathrm{G}^0$ | Name | $\mathrm{G}/\mathrm{G}^0$ | $\#\mathrm{G}/\mathrm{G}^0$ | $\mathrm{Pr}[t\!=\!0]$ | Trace moments |
|---|---|---|---|---|---|---|---|---|---|
| 1.2.A.1.1a | $1$ | $2$ | $3$ | $\mathrm{SU}(2)$ | $\mathrm{SU}(2)$ | $C_1$ | $1$ | $0$ | $1, 0, 1, 0, 2, 0, 5, 0, 14$ |
| 1.2.B.1.1a | $1$ | $2$ | $1$ | $\mathrm{U}(1)$ | $\mathrm{U}(1)$ | $C_1$ | $1$ | $0$ | $1, 0, 2, 0, 6, 0, 20, 0, 70$ |
| 1.2.B.2.1a | $1$ | $2$ | $1$ | $\mathrm{U}(1)$ | $N(\mathrm{U}(1))$ | $C_2$ | $2$ | $1/2$ | $1, 0, 1, 0, 3, 0, 10, 0, 35$ |
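These rows appear to be Sato-Tate group records (weight 1, degree 2) with identity components $\mathrm{SU}(2)$, $\mathrm{U}(1)$ and $N(\mathrm{U}(1))$. As a sanity check of what the "Trace moments" column lists, here is a small Monte Carlo sketch — my own illustration, not taken from the source — that reproduces the first row's moment sequence $1, 0, 1, 0, 2, 0, 5, 0, 14$ by sampling Haar-random $\mathrm{SU}(2)$ elements via uniform unit quaternions (the trace of the corresponding $2\times 2$ matrix is twice the first quaternion component):

```python
import numpy as np

rng = np.random.default_rng(0)

# A Haar-random SU(2) element corresponds to a uniformly random unit quaternion
# (a, b, c, d); the trace of the associated 2x2 matrix is 2a.
q = rng.normal(size=(1_000_000, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
traces = 2.0 * q[:, 0]

# Empirical moments E[t^n] for n = 0..8
moments = [(traces ** n).mean() for n in range(9)]
print(np.round(moments, 1))
# Approximately [1, 0, 1, 0, 2, 0, 5, 0, 14], matching the SU(2) row above.
```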
|
# Why do we multiply $\cos\theta$ in the formula for work?
I know that the formula for work is $$W = FS\cos\theta$$, where $$F$$ is the applied force, $$S$$ is the displacement of the object and $$\theta$$ is the angle between the applied force and the displacement of the object. When the object moves along the direction of the force, then we use $$W = FS$$. But why do we multiply it with $$\cos\theta$$ when the object doesn't move along the direction of the force?
• \begin{aligned}W=\int \overrightarrow{F}\cdot d\overrightarrow{s}=\int \left( \overrightarrow{F}_{\parallel}+\overrightarrow{F}_{\bot }\right) \cdot d\overrightarrow{s}= \int \overrightarrow{F}_{\parallel}\cdot d\overrightarrow{s}=\int F\cos \left( \varphi \right) ds\end{aligned} – Eli Jun 9 at 18:56
In general, work is a dot product between two vectors. You must integrate over the path, $$W=\int \vec F\cdot \mathrm d\vec s$$, but if the force is constant, this simplifies to:
$$W=\vec F\cdot \vec s.$$
A dot product is mathematically the parallel components multiplied together. We could write it as
$$W=\vec F\cdot \vec s \quad\Leftrightarrow\quad W=F_\parallel \;s\quad\Leftrightarrow\quad W=F\;s_\parallel$$
if we wanted to. These three versions are all equivalent. The trick we need is how to get the parallel component only. If the forces are already parallel, then no biggie. Just multiply them without change:
$$W=F\;s\qquad \leftarrow \text{if parallel.}$$
If not, then it turns out that the cosine function can help. Remember from the unit-circle definition how cosine is defined: it is the horizontal distance while sine is the vertical distance. The cosine and sine values constitute the two legs (catheti) of a right-angled triangle in which the hypotenuse is 1. Scale it up to a hypotenuse with the length of your vector $$F$$ and the legs are scaled by the same factor, so they become $$\cos(\theta)F$$ and $$\sin(\theta)F$$.
These legs are the horizontal and vertical components, respectively, of the vector $$F$$, meaning:
$$F_\parallel=\cos(\theta)F\qquad\text{ and }\qquad F_\perp=\sin(\theta)F.$$
We could do the same for the vector $$s$$. In both cases, when we replace either $$F_\parallel$$ or $$s_\parallel$$ in work formula, we introduce this cosine term:
$$W=\vec F\cdot \vec s \quad\Leftrightarrow\quad W=F_\parallel \;s\quad\Leftrightarrow\quad W=F\;s_\parallel \quad\Leftrightarrow\quad W=Fs\cos(\theta).$$
The dot product uses cosine in this way. The cross product uses sine (used in the torque formula e.g.). To remember this technique for other uses, try to memorize that the cosine is equal to the adjacent leg of the right-angled triangle over the hypotenuse. And sine is equal to the opposite leg over the hypotenuse:
$$\cos(\theta)=\frac{\text{adjacent leg}}{\text{hypotenuse}}\qquad \text{ and }\qquad \sin(\theta)=\frac{\text{opposite leg}}{\text{hypotenuse}}.$$
(I don't know if there is some smart mnemonic in English - in Danish I typically say "cos er hos", because "hos" means "at / belongs to".) Since the hypotenuse always is the full vector length, then just flip it over to the other side of the equal sign, and then you have the expression you need for the parallel component (adjacent leg) or perpendicular component (opposite leg).
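To make the equivalence of these forms concrete, here is a small numerical check (an illustrative sketch with arbitrary example vectors, not part of the original answer): the dot product and the $$Fs\cos(\theta)$$ expression give the same work.

```python
import numpy as np

F = np.array([3.0, 4.0])   # example force in N (assumed values)
s = np.array([2.0, 0.0])   # example displacement in m (assumed values)

W_dot = F @ s              # work as a dot product

# Work as |F| |s| cos(theta), with theta the angle between the two vectors
theta = np.arctan2(F[1], F[0]) - np.arctan2(s[1], s[0])
W_cos = np.linalg.norm(F) * np.linalg.norm(s) * np.cos(theta)

print(W_dot, W_cos)  # both evaluate to 6 J: only the component of F along s (3 N) does work
```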
Because the force applied on the object produces two effects - changing kinetic energy of the object by changing its speed and curving its path.
The work is supposed to tell us, how much kinetic energy was transferred to the object (or from it). The kinetic energy is however increased only by the component of the force in the direction of the motion, which is why the cosine appears. The perpendicular component serves only to curve the trajectory and there is no energy associated with the curvature of the motion so we are not interested in this component.
• THIS is the key, nice answer. I suggest you include a circular motion example. In circular motion, the parallel force increases kinetic energy while the normal component only maintains the rotation. That is to say, components parallel to motion increase velocity, while normal components only change the trajectory but not the modulus of the velocity – FGSUZ Jun 9 at 15:45
Work done is a scalar quantity and it is defined by the product of two vectors, force and displacement. Now, the dot product of two vectors gives a scalar quantity. So work done must be a dot product of the force and displacement vectors. That's why $$W=FS\cos(\theta)$$
• $|\vec{F}|\cdot|\vec{S}|$ is also a scalar quantity. The presence of the cosine has a deeper meaning than just making it a scalar – FGSUZ Jun 9 at 15:42
To give an intuitive idea:
Imagine a person carrying a heavy suitcase fitted with wheels.
As long as the person pulls his luggage on a flat terrain, he does (ideally) no work: The (gravitational) force is perpendicular to the motion.
If the person arrives at a place where he/she has to pull the luggage, say, along an uphill ramp, then the more inclined the ramp, the more work said person will have to provide to go forward, because the larger the angle, the more the motion of the suitcase is aligned with the force of gravity.
In this example the $$\cos(\alpha)$$ measures how much the motion of the suitcase is aligned with gravity, i.e. with the vertical axis.
$$W=\mathbf{F} \cdot \mathbf{S}$$ (work done is the dot product between the force applied and the displacement of the system)
There is another concept, torque: $$\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}$$ (torque is the cross product, or vector product, of the lever-arm position vector and the applied force)
Dot product between the two vectors (both force and displacement are vector quantities here) is $$\mathbf{F} \cdot \mathbf{S} = FS \cos\theta$$ ($$\theta$$ is the angle between the two vectors). In conclusion, we measure how much force is applied in the direction of displacement or how much displacement happened in the direction of force.
• Welcome to Physics SE. First of all, try to learn MathJax to make your posts easier to read math.meta.stackexchange.com/questions/5020/…. Second, work in mechanics is a measure of how much energy is transformed to/from kinetic energy. A measure of how much force is applied in the direction of displacement is the tangential component of the force, not work. And third, I do not see a question, nor even any mention of "work being done in some angle", so your answer seems to me mostly irrelevant. – Umaxo Jun 9 at 12:08
|
## VPI - Vision Programming Interface
#### 1.1 Release
Harris Corner Detector
# Overview
This algorithm implements the Harris keypoint detection operator that is commonly used to detect keypoints and infer features of an image.
The standard Harris detector algorithm as described in [1] is applied first. After that, a non-max suppression pruning process is applied to the result to remove multiple or spurious keypoints.
Example parameters (the sample input image and output keypoints are not reproduced here):
\begin{align*} \mathit{gradientSize} &= 5 \\ \mathit{blockSize} &= 5 \\ \mathit{strengthThresh} &= 20 \\ \mathit{sensitivity} &= 0.01 \\ \mathit{minNMSDistance} &= 8 \end{align*}
# Implementation
1. Compute the spatial gradient of the input using one of the following filters, depending on the value of VPIHarrisCornerDetectorParams::gradientSize :
For gradientSize = 3:
\begin{align*} \mathit{sobel}_x &= \frac{1}{4} \cdot \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} -1 & 0 & 1 \end{bmatrix} \\ \mathit{sobel}_y &= (\mathit{sobel}_x)^\intercal \end{align*}
For gradientSize = 5:
\begin{align*} \mathit{sobel}_x &= \frac{1}{16} \cdot \begin{bmatrix} 1 \\ 4 \\ 6 \\ 4 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} -1 & -2 & 0 & 2 & 1 \end{bmatrix} \\ \mathit{sobel}_y &= (\mathit{sobel}_x)^\intercal \end{align*}
For gradientSize = 7:
\begin{align*} \mathit{sobel}_x &= \frac{1}{64} \cdot \begin{bmatrix} 1 \\ 6 \\ 15 \\ 20 \\ 15 \\ 6 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} -1 & -4 & -5 & 0 & 5 & 4 & 1 \end{bmatrix} \\ \mathit{sobel}_y &= (\mathit{sobel}_x)^\intercal \end{align*}
2. Compute a gradient covariance matrix (structure tensor) for each pixel within a block window, as described by:
$M = \sum_{p \in B}\begin{bmatrix}I_x^2(p) & I_x(p) I_y(p) \\ I_x(p) I_y(p) & I_y^2(p) \end{bmatrix}$
where:
• p is a pixel coordinate within B, a block window of size 3x3, 5x5 or 7x7.
• $$I(p)$$ is the input image
• $$I_x(p) = I(p) * \mathit{sobel}_x$$
• $$I_y(p) = I(p) * \mathit{sobel}_y$$
3. Compute a Harris response score using a sensitivity factor
$R = \mathit{det}(M) - k \cdot \mathit{trace}^2(M )$
where k is the sensitivity factor
4. Applies a threshold-strength criterion, pruning keypoints whose response <= VPIHarrisCornerDetectorParams::strengthThresh.
5. Applies a non-max suppression pruning process.
This process splits the input image into a 2D cell grid. It selects a single corner with the highest response score inside the cell. If several corners within the cell have the same response score, it selects the bottom-right corner.
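The steps above can be illustrated with a minimal NumPy/SciPy sketch of the Harris response computation (gradient, structure tensor, response score). This is only an illustration of the math, not the VPI implementation: border handling and non-max suppression are omitted, and the function and variable names are invented for the example.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def harris_response(image, block_size=5, sensitivity=0.01):
    """Toy Harris response: gradients -> structure tensor M -> R = det(M) - k*trace(M)^2."""
    img = image.astype(np.float32)

    # Separable 3x3 Sobel kernels (the gradientSize = 3 case above)
    smooth = np.array([1.0, 2.0, 1.0]) / 4.0
    deriv = np.array([-1.0, 0.0, 1.0])
    sobel_x = np.outer(smooth, deriv)
    sobel_y = sobel_x.T

    ix = convolve(img, sobel_x)
    iy = convolve(img, sobel_y)

    # Structure tensor entries summed over the block window B
    ixx = uniform_filter(ix * ix, size=block_size) * block_size**2
    iyy = uniform_filter(iy * iy, size=block_size) * block_size**2
    ixy = uniform_filter(ix * iy, size=block_size) * block_size**2

    det_m = ixx * iyy - ixy * ixy
    trace_m = ixx + iyy
    return det_m - sensitivity * trace_m**2

# Example: the corners of a bright square produce the strongest responses
img = np.zeros((64, 64), dtype=np.float32)
img[16:48, 16:48] = 255.0
resp = harris_response(img)
print(np.unravel_index(np.argmax(resp), resp.shape))
```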
# Usage
Language:
1. Import VPI module
import vpi
2. Execute the algorithm on the input image using the CUDA backend. Two VPI arrays are returned, one with keypoint positions, and another with the scores. The keypoints array has type vpi.Type.KEYPOINT and the scores array has type vpi.Type.U32.
with vpi.Backend.CUDA:
    keypoints, scores = input.harriscorners(sensitivity=0.01)
1. Initialization phase
1. Include the header that declares the functions that implement the Harris Corner Detector algorithm.
2. Define the input image object.
VPIImage input = /*...*/;
3. Create the output arrays that will store the keypoints and their scores.
VPIArray keypoints;
vpiArrayCreate(8192, VPI_ARRAY_TYPE_KEYPOINT, 0, &keypoints);
VPIArray scores;
vpiArrayCreate(8192, VPI_ARRAY_TYPE_U32, 0, &scores);
4. Since this algorithm needs temporary memory buffers, create the payload for it on the CUDA backend.
int32_t w, h;
vpiImageGetSize(input, &w, &h);
VPIPayload harris;
vpiCreateHarrisCornerDetector(VPI_BACKEND_CUDA, w, h, &harris);
5. Create the stream where the algorithm will be submitted for execution.
VPIStream stream;
vpiStreamCreate(0, &stream);
2. Processing phase
1. Initialize the configuration structure with default parameters and set sensitivity to a new value.
VPIHarrisCornerDetectorParams params;
vpiInitHarrisCornerDetectorParams(&params);
params.sensitivity = 0.01;
2. Submit the algorithm and its parameters to the stream. It'll be executed by the CUDA backend associated with the payload.
vpiSubmitHarrisCornerDetector(stream, 0, harris, input, keypoints, scores, &params);
3. Optionally, wait until the processing is done.
vpiStreamSync(stream);
3. Cleanup phase
1. Free resources held by the stream, the payload, the input image and the output arrays.
vpiStreamDestroy(stream);
vpiPayloadDestroy(harris);
vpiImageDestroy(input);
vpiArrayDestroy(keypoints);
vpiArrayDestroy(scores);
For more information, see Harris Corners in the "API Reference" section of VPI - Vision Programming Interface.
# Limitations and Constraints
Constraints for specific backends supersede the ones specified for all backends.
## All Backends
• Input image must have same dimensions as the ones specified during payload creation.
• Only supports Sobel gradient kernels of sizes 3x3, 5x5 and 7x7.
• Output scores and keypoints arrays must have the same capacity.
• Must satisfy $$\mathit{minNMSDistance} \geq 1$$.
• The following image types are accepted:
• On 16-bit inputs, the pixel values must be restricted to 12-bit, or else overflows in score calculation will occur. In this case some keypoints might be invalid.
## PVA
• Only available on Jetson Xavier devices.
• Only supports VPI_IMAGE_FORMAT_S16.
• Output keypoints and scores array capacity must be 8192.
• Only accepts $$\mathit{minNMSDistance} = 8$$.
• Image dimensions limited to minimum of 160x120, maximum of 3264x2448.
## VIC
• Not implemented.
# Performance
For information on how to use the performance table below, see Algorithm Performance Tables.
Before comparing measurements, consult Comparing Algorithm Elapsed Times.
For further information on how performance was benchmarked, see Performance Benchmark.
# References
1. C. Harris, M. Stephens (1988), "A Combined Corner and Edge Detector"
Proceedings of Alvey Vision Conference, pp. 147-151.
|
What is the basic method for proving the irrationality of a number?
• July 20th 2010, 06:47 AM
mfetch22
What is the basic method for proving the irrationality of a number?
I know that to prove the irrationality of $\sqrt{2}$ we simply manipulate the following:
$(\frac{p}{q})^2 = 2$
until we hit the contradiction of not being in reduced terms. I also know that we must take into account that all numbers can be written in the form $2n$ or $2n+1$ for $n=1,2,3...$; and the fact that if $x^2$ is an even number then $x$ is an even number. Here's my question: how do you prove the irrationality of a certain operation on numbers in general? And for odd square roots?
Two examples of what I mean:
[1] How would you prove that if $n=1, 2, 3...$ that $\sqrt{n}$ is always irrational?
[2] How would you prove $\sqrt{3}$ is irrational? Would you find a way to define numbers in terms of 3's? Just like with the $\sqrt{2}$? So all numbers would be of the form $3n$, $3n+1$, and $3n+2$; and would you somehow use that fact to prove the irrationality of $\sqrt{3}$?
• July 20th 2010, 06:51 AM
Ackbeet
[1] You wouldn't. It's not true: the square root of 4 is rational.
[2] I think this proof follows pretty much the same lines as for the square root of 2.
• July 20th 2010, 07:03 AM
mfetch22
Oops. That was a big mistake. For [1] I meant this:
[1] How would one prove that $\sqrt{x}$ is irrational for all $x = 1, 2, 3...$ such that $x \neq n^2$ for any natural number $n$ such that $n = 1, 2, 3...$
My apologies, that was a very idiotic error in clarity. Hopefully that makes sense now.
• July 20th 2010, 07:05 AM
Ackbeet
I think generally that that proof would go along the same lines as for the square root of 2. Try it and see what happens.
• July 20th 2010, 07:17 AM
Quote:
Originally Posted by mfetch22
I know that to prove the irrationality of $\sqrt{2}$ we simply manipulate the following:
$(\frac{p}{q})^2 = 2$
until we hit the contradiction of not being in reduced terms. I also know that we must take into account that all numbers can be written in the form $2n$ or $2n+1$ for $n=1,2,3...$; and the fact that if $x^2$ is an even number then $x$ is an even number. Here's my question: how do you prove the irrationality of a certain operation on numbers in general? And for odd square roots?
Two examples of what I mean:
[1] How would you prove that if $n=1, 2, 3...$ that $\sqrt{n}$ is always irrational?
[2] How would you prove $\sqrt{3}$ is irrational? Would you find a way to define numbers in terms of 3's? Just like with the $\sqrt{2}$? So all numbers would be of the form $3n$, $3n+1$, and $3n+2$; and would you somehow use that fact to prove the irrationality of $\sqrt{3}$?
For (2), it's the same way you would prove the irrationality of $\sqrt{2}$.
Assuming that $\sqrt{3}$ is rational, then it takes the form $\sqrt{3}=\frac{m}{n}$, where $m,n\in Z$ and m and n do not have common factors.
$3=\frac{m^2}{n^2}\Rightarrow m^2=3n^2\Rightarrow n^2=\frac{m}{3}\cdot m$
Since $n^2$ is an integer, it follows that m/3 is an integer too. Thus, 3 is a factor of m.
Let $m=3p$ where $p\in Z$; then $9p^2=3n^2$, so $n^2=3p^2$
$p^2=\frac{n}{3}\cdot n$
Since $p^2$ is an integer, $n/3 \in Z$, so 3 is a factor of n as well.
We see that both m and n have a common factor of 3 which contradicts our assumption.
• July 20th 2010, 07:28 AM
Also sprach Zarathustra
It is a nice exercise to prove the following:
Every square root of a prime number is irrational.
• July 20th 2010, 07:36 AM
mfetch22
Okay, you're right, I think I've figured out the proof for all even numbers, but I'm a little lost for the odd numbers. Here is what I have, please point out any mistakes I made.
-> We want to prove that $\sqrt{n}$ is irrational, as long as no numbers $x = 1, 2, 3....$ satisfy $x^2 = n$.
[i] If $\sqrt{n}$ is rational, then it can be written in the following form:
$(\frac{a}{b}) = \sqrt{n}$ thus $(\frac{a}{b})^2 = n$
[ii] This leads to the following equation:
$a^2 = nb^2$
$a^2$ is either even or odd. Lets start with even.
[Case 1]
Say $a^2$ is even, then for some $p = 1, 2, 3...$ we have
$a^2 = 2p = nb^2$
Thus, if $a^2$ is even, [of the form $2p$]
then $a$ is even, and of the form $2k$
for some natural number $k = 1, 2, 3....$.
Thus, we have $a = 2k$ and :
$a^2 = 4k^2 = 2p = nb^2$
This proves $nb^2$ to be even.
If $n$ is even, then we'd arrive at the form (for some $t=1,2,3...$)
$b^2 = 2t$
thus, $b = 2g$
for some $g =1,2,3....$
Thus $a$ and $b$ have a common divisor, and this is a contradiction.
Is this correct? For this one case where $a = 2k$ for some $k = 1, 2, 3...$ and $n = 2g$ for some $g=1, 2, 3...$? I don't know exactly how to go about proving it with the odd numbers, I tried on pen and paper using the form $2k+1$ for $k = 1, 2, 3...$ but the "common factor" never seemed to show up. Can somebody give me a hint or show me the right direction?
• July 20th 2010, 07:40 AM
melese
Quote:
[1] How would you prove that if $n=1, 2, 3...$ that $\sqrt{n}$ is always irrational?
When you use the Unique Factorization Theorem, you get a simple proof.
Let $n$ be a positive integer. We can show that if $\sqrt{n}$ is rational, then $n$ must be a perfect square. From here it follows that if $n$ is not a perfect square, then $\sqrt{n}$ is irrational.
Suppose that $\sqrt{n}=a/b$ for positive integers $a, b$. Then $b^2n=a^2$. For any prime $q$, let $q^A, q^N,$ and $q^B$ be the highest powers of $q$ that divide $a, n,$ and $b$, respectively.
Now, because $b^2n=a^2$ and using the Unique Factorization Theorem, we must have $2B+N=2A$ and then $2|N$. This means that $n$ is a perfect square.
This way you can even prove a more general result: For any positive integers $n, m$; if $\sqrt[m]{n}$ is rational, then $n$ must be a perfect $m$th power.
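To see the exponent argument on a concrete non-square (my own worked example, not from the original poster): take $n=12=2^2\cdot 3$. If $\sqrt{12}=a/b$ with positive integers $a,b$, then $b^2\cdot 12=a^2$. Comparing the exponent of the prime $3$ on both sides gives $2B+1=2A$, where $3^A$ and $3^B$ are the highest powers of 3 dividing $a$ and $b$. The left side is odd and the right side is even, a contradiction, so $\sqrt{12}$ is irrational. The argument only breaks down when every prime exponent of $n$ is even, i.e. when $n$ is a perfect square.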
• July 21st 2010, 05:57 AM
melese
An alternative solution.
There is a nice proof I saw once. It can be generalized to any positive integer that is not a perfect square.
Suppose that $\sqrt{2}=a/b$, where $a, b$ are positive integers that are relatively prime. Then $\sqrt{2}=\sqrt{2}\cdot1=\sqrt{2}\cdot(ar+bs)$ for some integers $r, s$ with $ar+bs=1$. (Bézout's Identity)
Because $\sqrt{2}=a/b$, we have $a=\sqrt{2}b$ and $b=a/\sqrt{2}$. Then $\sqrt{2}=\sqrt{2}ar+\sqrt{2}bs=2br+as$, which is clearly an integer.
This means that $\sqrt{2}$ is an integer, but $1<\sqrt{2}<2$; contradiction.
|
# A NOTE ON THE WEIGHTED TWISTED DIRICHLET'S TYPE q-EULER NUMBERS AND POLYNOMIALS
• Araci, Serkan ;
• Aslan, Nurgul ;
• Seo, Jong-Jin
• Accepted : 2011.06.07
• Published : 2011.09.25
#### Abstract
In this paper we construct Dirichlet's type twisted q-Euler numbers and polynomials with weight ${\alpha}$. We give some interesting identities and relations.
#### Keywords
Euler numbers and polynomials;q-Euler numbers and polynomials;Twisted q-Euler numbers and polynomials with weight ${\alpha}$;Dirichlet's type twisted q-Euler numbers and polynomials with weight ${\alpha}$
#### References
1. Araci, S. Erdal, D. and Seo, J.J., A Study on the The Weighted q-Genocchi Numbers and Polynomials Their Interpolation function, (Submitted)
2. Araci, S. Seo, J.J. and Erdal, D., Different Approach On The (h; q) Genocchi Numbers and Polynomials Associated with q-Bernstein Polynomials, (Submitted)
3. Kim, T., A New Approach to q-Zeta Function, Adv. Stud. Contemp. Math. 11 (2) 157-162.
4. Araci, S. Seo, J.J. and Erdal, D., New Construction weighted (h; q)-Genocchi numbers and Polynomials Related to Zeta Type Functions, Discrete Dynamics in Nature and Society(in press)
5. Kim, T., On the q-extension of Euler and Genocchi numbers, J. Math. Anal. Appl. 326 (2007) 1458-1465. https://doi.org/10.1016/j.jmaa.2006.03.037
6. Kim, T., On the multiple q-Genocchi and Euler numbers, Russian J. Math. Phys. 15 (4) (2008) 481-486. arXiv:0801.0978v1 [math.NT] https://doi.org/10.1134/S1061920808040055
7. Kim, T., On the weighted q-Bernoulli numbers and polynomials, Advanced Studies in Contemporary Mathematics 21(2011), no.2, p. 207-215, http://arxiv.org/abs/1011.5305.
8. Kim, T., A Note on the q-Genocchi Numbers and Polynomials, Journal of Inequalities and Applications 2007 (2007) doi:10.1155/2007/71452. Article ID 71452, 8 pages. https://doi.org/10.1155/2007/71452
9. Kim, T., q-Volkenborn integration, Russ. J. Math. phys. 9(2002) ; 288-299.
10. Kim, T., An invariant p-adic q-integrals on Zp, Applied Mathematics Letters, vol. 21, pp. 105-108, 2008. https://doi.org/10.1016/j.aml.2006.11.011
11. Kim, T., q-Euler numbers and polynomials associated with p-adic q-integrals, J. Nonlinear Math. Phys., 14 (2007), no. 1, 15-27. https://doi.org/10.2991/jnmp.2007.14.1.3
12. Kim, T., New approach to q-Euler polynomials of higher order, Russ. J. Math. Phys., 17 (2010), no. 2, 218-225. https://doi.org/10.1134/S1061920810020068
13. Kim, T., Some identities on the q-Euler polynomials of higher order and q-Stirling numbers by the fermionic p-adic integral on Zp, Russ. J. Math. Phys., 16 (2009), no.4, 484-491. https://doi.org/10.1134/S1061920809040037
14. Kim, T. and Rim, S.-H., On the twisted q-Euler numbers and polynomials associated with basic q-l-functions, Journal of Mathematical Analysis and Applications, vol. 336, no. 1, pp. 738-744, 2007. https://doi.org/10.1016/j.jmaa.2007.03.035
15. T. Kim, On p-adic q-l-functions and sums of powers, J. Math. Anal. Appl. (2006), doi:10.1016/j.jmaa.2006.07.071 https://doi.org/10.1016/j.jmaa.2006.07.071
16. Park. Kyoung Ho., On Interpolation Functions of the Generalized Twisted (h; q)-Euler Polynomials, Journal of Inequalities and Applications., Volume 2009, Article ID 946569, 17 pages
17. Jang. L.-C., On a q-analogue of the p-adic generalized twisted L-functions and p-adic q-integrals, Journal of the Korean Mathematical Society, vol. 44, no. 1, pp. 1-10, 2007. https://doi.org/10.4134/JKMS.2007.44.1.001
18. Ryoo. C. S., A note on the weighted q-Euler numbers and polynomials, Advan. Stud. Contemp. Math. 21(2011), 47-54.
19. Ryoo. C. S, Lee. H. Y, and Jung. N. S., A note on the twisted q-Euler numbers and polynomials with weight ${\alpha}$, (Communicated).
20. Y. Simsek, Theorems on twisted L-function and twisted Bernoulli numbers, Advan. Stud. Contemp. Math., 11(2005), 205-218.
21. Y. Simsek, Twisted (h; q)-Bernoulli numbers and polynomials related to twisted (h; q)-zeta function and L-function, J. Math. Anal. Appl., 324(2006), 790-804. https://doi.org/10.1016/j.jmaa.2005.12.057
22. Y. Simsek, On p-Adic Twisted q-L-Functions Related to Generalized Twisted Bernoulli Numbers, Russian J. Math. Phys., 13(3)(2006), 340-348. https://doi.org/10.1134/S1061920806030095
23. Dolgy, D-V., Kang, D-J., Kim, T., and Lee, B., Some new identities on the twisted (h; q)-Euler numbers q-Bernstein polynomials, arXiv: 1105.0093.
#### Cited by
1. Some identities of Bernoulli, Euler and Abel polynomials arising from umbral calculus vol.2013, pp.1, 2013, https://doi.org/10.1186/1687-1847-2013-15
|
# Why is vapour pressure not dependent on shape and size of the container?
In my textbook, it's written that the vapour pressure of a liquid does not depend on the shape and size of the container, but won't a container that provides less surface area to the liquid have a lower vapour pressure?
For example, consider the two containers given in the diagram below; the volume of the container and of the liquid is the same in both diagrams, only the shape is changed.
Wouldn't the vapour pressure be lower in the diagram on the left side? And how does changing the size not affect the vapour pressure?
• Consider though that "vapor pressure" is really "equilibrium vapor pressure" thus ignoring dynamics. Due to smaller surface area the left system would be slower to come to equilibrium than the right system.
– MaxW
Sep 23 '19 at 16:24
• For a sufficiently tall container, vapor pressure does vary with height, in a gravitational field, so it is dependent on shape. Sep 23 '19 at 17:56
• At equilibrium, both forward and reverse rate increase with increasing surface area, so the equilibrium constant is not affected. Your diagram is lacking blue molecules in the gas phase. Sep 23 '19 at 19:38
• @DrMoishePippik I am afraid your comment is confusing OP. Sep 24 '19 at 8:49
• @DrMoishePippik. Yes, and of course. But this remark about tall containers in a gravitational field can be made every time the gas isn't perfect, and thus for basically every question regarding gases. What matters is the pressure at the condensed-phase/gas interface; by definition, vapour pressure is a function of T only. The pressure at the top has nothing to do with the vapour pressure, for that region is not an interface. And the same thinking applies to pressure in general: atmospheric pressure at the ground and at K2 are of course different, but the vapour pressure is the same. It might be a different definition in earth science and meteorology... Sep 27 '19 at 10:00
## 2 Answers
Vapor pressure does not depend on surface area because it is derived from the thermodynamic equilibrium between the liquid and gas phases of a substance. The closer the vapor pressure is to atmospheric pressure, the closer that substance is to boiling. At its boiling point, vapor pressure equals atmospheric pressure and the molecules that comprise the liquid now have enough thermal energy to overcome the intermolecular forces that are maintaining it in a condensed phase. So it might help to think about how container geometry does not affect the boiling point of a liquid; the same reasons are true for why it does not affect the vapor pressure.
The evaporation rate in terms of mass/time is absolutely a function of the container geometry, but equilibrium vapor pressure is an intrinsic molecular property.
Evaporation of a liquid (and condensation of vapour) are physical processes. We can write the equations for a liquid material $$\text{M}$$ as follows:
$$\ce{M_{(l)}} \rightleftharpoons \ce{M_{(g)}}$$
By the law of mass action, we can write the expression for the equilibrium constant as follows:
$$K = \dfrac{[\ce{M_{(g)}}]}{[\ce{M_{(l)}]}}$$
We typically use partial pressures for the activity of gases and unity for the activity of liquids. Thus,
$$K_{p} = \dfrac{p_{M}}{1} = p_{M}$$
Suppose we start with the liquid $$\ce{M}$$ in a container with an arbitrary amount of headspace. It will evaporate till the vapour exerts a pressure equivalent to $$p_{M}$$ (of course, $$p_{M}$$ depends on the temperature).
Thus, with a change in the size of a container, there will be a difference in the amount of $$\ce{M}$$ in the vapour phase, but not the partial pressure exerted by the vapours of $$\ce{M}$$ at equilibrium (a.k.a. the vapour pressure).
The shape might affect the flux at which the dynamic exchange between vapour and liquid phase $$\ce{M}$$ is taking place, but nothing more.
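To make the last point concrete, here is a small numerical sketch added for illustration (not part of the original answer). It assumes water at 25 °C with an equilibrium vapour pressure of roughly 3.17 kPa and ideal-gas behaviour; the two headspace volumes are hypothetical. Doubling the headspace doubles the amount of vapour at equilibrium, while the pressure itself stays fixed by the temperature.

```r
# Hedged illustration: equilibrium vapour pressure is set by temperature,
# so only the *amount* of vapour changes with headspace volume (ideal gas assumed).
p_vap <- 3.17e3        # Pa, approx. vapour pressure of water at 25 C (assumed value)
Temp  <- 298.15        # K
R_gas <- 8.314         # J / (mol K)

headspace <- c(small = 0.5e-3, large = 1.0e-3)   # m^3, two hypothetical containers

n_vap <- p_vap * headspace / (R_gas * Temp)      # moles of vapour at equilibrium
n_vap   # roughly 6.4e-4 and 1.3e-3 mol: the amount doubles, the pressure does not
```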
|
# High frequency data in stochastics
Mark Podolskij
(Aarhus University)
Thiele Seminar
Thursday, 13 March 2014, at 13:15-14:00, in Koll. G (1532-214)
Abstract:
High frequency data are nowadays commonly observed in various applied sciences. The notion of high frequency refers to a sampling scheme in which the distance between two consecutive observations converges to zero while the total observation interval remains fixed. We will present a (necessarily incomplete) overview of limit theory for high frequency observations of different classes of processes, such as semimartingales, and their use in statistics.
A major part of the lecture will be devoted to some recent developments in the framework of Levy moving average processes.
Organised by: The T.N. Thiele Centre
Contact person: Søren Asmussen
|
Question
The 300-m-diameter Arecibo radio telescope pictured in Figure 27.28 detects radio waves with a 4.00 cm average wavelength. (a) What is the angle between two just-resolvable point sources for this telescope? (b) How close together could these point sources be at the 2 million light year distance of the Andromeda galaxy?
Question Image
1. $1.63 \times 10^{-4} \textrm{ rad}$
2. $325 \textrm{ ly}$
Solution Video
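As a quick check of these numbers (a sketch added here, not part of the original solution), the Rayleigh criterion $\theta \approx 1.22\,\lambda/D$ together with the small-angle relation $s \approx \theta d$ reproduces both answers:

```r
# Rayleigh criterion for the Arecibo dish
lambda <- 0.04                 # m, average wavelength
D      <- 300                  # m, dish diameter
theta  <- 1.22 * lambda / D    # radians
theta                          # ~1.63e-4 rad, answer (a)

d <- 2e6                       # distance to Andromeda, in light years
s <- theta * d                 # small-angle separation, in light years
s                              # ~325 ly, answer (b)
```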
|
## Past records
#### Applied Analysis Seminar
16:00-17:30 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 002
On a phase field model for mode III crack growth
[ Abstract ]
We consider a phase field model describing the growth of a crack caused by
anti-plane (mode III) deformation of a two-dimensional elastic body. The
derivation of the model is based on Griffith's fracture criterion and uses
the energy-regularization idea of Ambrosio-Tortorelli.
#### Operator Algebra Seminar
16:30-18:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 126
Alin Ciuperca (Univ. Toronto)
Isomorphism of Hilbert modules over stably finite $C^*$-algebras
#### Lie Groups and Representation Theory Seminar
13:30-17:20 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 050
Quantization of complex manifolds
[ Reference URL ]
http://www.ms.u-tokyo.ac.jp/~toshi/index.files/oshima60th200901.html
Global geometry on locally symmetric spaces — beyond the Riemannian case
[ Abstract ]
The local to global study of geometries was a major trend of 20th century geometry, with remarkable developments achieved particularly in Riemannian geometry.
In contrast, in areas such as Lorentz geometry, familiar to us as the space-time of relativity theory, and more generally in pseudo-Riemannian geometry, as well as in various other kinds of geometry (symplectic, complex geometry, ...), surprising little is known about global properties of the geometry even if we impose a locally homogeneous structure.
In this talk, I plan to give an exposition on the recent developments on the question about the global natures of locally non-Riemannian homogeneous spaces, with emphasis on the existence problem of compact forms, rigidity and deformation.
Classification of Fuchsian systems and their connection problem
[ Abstract ]
We explain a classification of Fuchsian systems on the Riemann sphere together with Katz's middle convolution, Yokoyama's extension and their relation to a Kac-Moody root system discovered by Crawley-Boevey.
Then we present a beautifully unified connection formula for the solution of the Fuchsian ordinary differential equation without moduli and apply the formula to the harmonic analysis on a symmetric space.
### January 14, 2009 (Wed)
#### Lecture
16:00-17:30 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 002
[ Abstract ]
Protein kinase C (PKC), which plays an important role in regulating neuronal excitability and synaptic plasticity, changes its intracellular localization in connection with its enzymatic activity (translocation). In mouse cerebellar Purkinje cells expressing a GFP-γPKC fusion protein, high-frequency stimulation of parallel-fiber synapses has been reported to trigger a translocation wave that propagates along the dendrites from the vicinity of the stimulated site. Recently, Tsubokawa found that the same stimulation produces an intracellular Ca2+ wave propagating through the dendrites at roughly the same speed, and suggested that this wave may lead the γPKC translocation wave. In this work we construct a mathematical model of the Purkinje cell based on physiological and anatomical data and attempt to reproduce the Ca2+ wave. Based on the results, we discuss the mechanism and the functional significance of the propagating translocation.
### January 13, 2009 (Tue)
#### Lecture
10:30-11:30 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 002
No course credit is given for this lecture.
Gieri Simonett (Vanderbilt University, USA)
Analytic semigroups, maximal regularity and nonlinear parabolic problems
[ Reference URL ]
http://www.math.sci.hokudai.ac.jp/sympo/090113/index.html
#### Tuesday Seminar on Topology
16:30-17:30 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 002
Tea: 16:00 - 16:30 Common Room
Compactification of the homeomorphism group of a graph
[ Abstract ]
Topological properties of homeomorphism groups, especially of finite-dimensional manifolds,
have been of interest in the area of infinite-dimensional manifold topology.
For a locally finite graph $\Gamma$ with countably many components,
the homeomorphism group $\mathcal{H}(\Gamma)$
and its identity component $\mathcal{H}_+(\Gamma)$ are topological groups
with respect to the compact-open topology. I will define natural compactifications
$\overline{\mathcal{H}}(\Gamma)$ and
$\overline{\mathcal{H}}_+(\Gamma)$ of these groups and describe the
topological type of the pair $(\overline{\mathcal{H}}_+(\Gamma), \mathcal{H}_+(\Gamma))$
using the data of $\Gamma$. I will also discuss the topological structure of
$\overline{\mathcal{H}}(\Gamma)$ where $\Gamma$ is the circle.
### January 12, 2009 (Mon)
#### Lie Groups and Representation Theory Seminar
16:30-18:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 126
[ Abstract ]
#### Tuesday Seminar on Analysis
16:30-18:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 128
Jacob S. Christiansen
(University of Copenhagen)
Finite gap Jacobi matrices (joint work with Barry Simon and Maxim Zinchenko)
### January 9, 2009 (Fri)
#### GCOE Lectures
17:00-18:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 123
Eric Opdam (University of Amsterdam)
The spectral category of Hecke algebras and applications. Lecture 2: Affine Hecke algebras and harmonic analysis.
[ Reference URL ]
http://www.ms.u-tokyo.ac.jp/~toshi/seminar/ut-seminar2009.html#20090108opdam
#### Lecture
16:00-17:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 370
Leevan Ling (Hong Kong Baptist University)
Effective Condition Numbers and Laplace Equations
[ Abstract ]
The condition number of a matrix is commonly used for investigating the
stability of solutions to linear algebraic systems. Recent meshless
techniques for solving PDEs have been known to give rise to
ill-conditioned matrices, yet are still able to produce results that are
close to machine accuracy. In this work, we consider the method of
fundamental solutions (MFS), which is known to solve, with extremely high
accuracy, certain
partial differential equations, namely those for which a fundamental
solution is known. To investigate the applicability of the MFS, either when
the boundary is not analytic or when the boundary data is not harmonic, we
examine the relationship between its accuracy and the effective condition
number.
#### GCOE Seminar
16:00-17:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 370
Leevan Ling (Hong Kong Baptist University)
Effective Condition Numbers and Laplace Equations
[ Abstract ]
The condition number of a matrix is commonly used for investigating the stability of solutions to linear algebraic systems. Recent meshless techniques for solving PDEs have been known to give rise to ill-conditioned matrices, yet are still able to produce results that are close to machine accuracy. In this work, we consider the method of fundamental solutions (MFS), which is known to solve, with extremely high accuracy, certain partial differential equations, namely those for which a fundamental solution is known. To investigate the applicability of the MFS, either when the boundary is not analytic or when the boundary data is not harmonic, we examine the relationship between its accuracy and the effective condition number.
### January 8, 2009 (Thu)
#### Operator Algebra Seminar
16:30-18:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 128
Stefaan Vaes (K. U. Leuven)
Rigidity for II$_1$ factors: fundamental groups, bimodules, subfactors
#### Seminar on Mathematics for Various Disciplines
16:30-17:30 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 056
Note: Please be aware that both the date and the time of this session differ from the usual schedule.
Calibration problems for Black-Scholes American Options under the GMMY process
[ Abstract ]
The calibration problem is formulated as a control problem for the parabolic variational inequality. The well-posedness of the formulation is discussed and the necessary optimality is derived. A numerical approximation method is also presented.
#### GCOE Lectures
17:00-18:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 123
Eric Opdam (University of Amsterdam)
The spectral category of Hecke algebras and applications
[ Abstract ]
Hecke algebras play an important role in the harmonic analysis of a p-adic reductive group. On the other hand, their representation theory and harmonic analysis can be described almost completely explicitly. This makes affine Hecke algebras an ideal tool to study the harmonic analysis of p-adic groups. We will illustrate this in this series of lectures by explaining how various components of the Bernstein center contribute to the level-0 L-packets of tempered representations, purely from the point of view of harmonic analysis.
We define a "spectral category" of (affine) Hecke algebras. The morphisms in this category are not algebra morphisms but are affine morphisms between the associated tori of unramified characters, which are compatible with respect to the so-called Harish-Chandra μ-functions. We show that such a morphism generates a Plancherel measure preserving correspondence between the tempered spectra of the two Hecke algebras involved. We will discuss typical examples of spectral morphisms.
We apply the spectral correspondences of affine Hecke algebras to level-0 representations of a quasi-split simple p-adic group. We will concentrate on the example of the special orthogonal groups $SO_{2n+1}(K)$. We show that all affine Hecke algebras which arise in this context admit a *unique* spectral morphism to the Iwahori-Matsumoto Hecke algebra, a remarkable phenomenon that is crucial for this method. We will recover in this way Lusztig's classification of "unipotent" representations.
[ Reference URL ]
http://www.ms.u-tokyo.ac.jp/~toshi/seminar/ut-seminar2009.html#20090108opdam
### January 6, 2009 (Tue)
#### Lecture
16:00-17:30 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 123
GCOE Lecture Series "Mathematical Models in Electrophysiology" (Lecture 3 of 3)
[ Abstract ]
- Physiology of the heart
- Three-dimensional cable models
- The homogenization limit and the bidomain model
- Propagation of excitation waves in the heart
#### Lecture
14:00-15:30 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 123
GCOE Lecture Series "Mathematical Models in Electrophysiology" (Lecture 2 of 3)
[ Abstract ]
- The Hodgkin-Huxley and FitzHugh-Nagumo models
- Nerve axons and the cable model
- Propagation of action potentials
- Myelinated nerves and saltatory conduction
### January 5, 2009 (Mon)
#### Lecture
16:00-17:30 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 123
GCOE Lecture Series "Mathematical Models in Electrophysiology" (Lecture 1 of 3)
[ Abstract ]
- Membrane potential and ion channels
- Cell volume regulation
- Channel gating
- The Hodgkin-Huxley model and excitability
- The Hodgkin-Huxley and FitzHugh-Nagumo models
- Nerve axons and the cable model
- Propagation of action potentials
- Myelinated nerves and saltatory conduction
- Physiology of the heart
- Three-dimensional cable models
- The homogenization limit and the bidomain model
- Propagation of excitation waves in the heart
### December 26, 2008 (Fri)
#### Monthly Seminar on Arithmetic of Automorphic Forms
13:30-16:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 123
On Siegel Eisenstein series of degree two and weight 2
[ Abstract ]
Using a combinatorial analysis of cusp singularities, we determine the dimension of the space in the title for a modular group of a certain level.
### December 19, 2008 (Fri)
#### GCOE Lecture Series on Mathematics in Society
16:20-17:50 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 128
### December 18, 2008 (Thu)
#### Operator Algebra Seminar
16:30-18:00 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 128
Benoit Collins (Univ. of Tokyo / University of Ottawa)
Some geometric and probabilistic properties of the free quantum group $A_o(n)$
### December 17, 2008 (Wed)
#### Statistics Seminar
13:40-14:50 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 002
Ilia Negri (University of Bergamo, Italy)
Goodness of fit tests for ergodic diffusions by discrete sampling schemes
[ Abstract ]
We consider a nonparametric goodness of fit test problem for the drift coefficient of one-dimensional ergodic diffusions, where the diffusion coefficient is a nuisance function which is estimated in some sense. Using a theory for the continuous observation case, we construct two kinds of tests based on different types of discrete observations, namely, the data observed discretely in time or in space. We prove that the limit distribution of our tests is the supremum of the standard Brownian motion, and thus our tests are asymptotically distribution free. We also show that our tests are consistent under any fixed alternatives.
joint with Yoichi Nishiyama (Inst. Statist. Math.)
[ Reference URL ]
http://www.ms.u-tokyo.ac.jp/~kengok/statseminar/2008/09.html
#### Statistics Seminar
15:00-16:10 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 002
Stefano Maria Iacus (Universita degli Studi di Milano, Italy)
Divergences Test Statistics for Discretely Observed Diffusion Processes
[ Abstract ]
In this paper we propose the use of $\phi$-divergences as test statistics to verify simple hypotheses about a one-dimensional parametric diffusion process $dX_t = b(X_t, \theta)dt + \sigma(X_t, \theta) dW_t$, from discrete observations at times $t_i = i\Delta_n$, $i=0, 1, \ldots, n$, under the asymptotic scheme $\Delta_n \to 0$, $n\Delta_n \to +\infty$ and $n\Delta_n^2 \to 0$. The class of $\phi$-divergences is wide and includes several special members like Kullback-Leibler, Rényi, power and alpha-divergences. We derive the asymptotic distribution of the test statistics based on $\phi$-divergences. The limiting law takes different forms depending on the regularity of $\phi$. These convergences differ from the classical results for independent and identically distributed random variables. Numerical analysis is used to show the small-sample properties of the test statistics in terms of estimated level and power of the test.
joint work with A. De Gregorio
[ Reference URL ]
http://www.ms.u-tokyo.ac.jp/~kengok/statseminar/2008/10.html
#### Statistics Seminar
16:20-17:30 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 002
Nicolas Privault (City University of Hong Kong)
Stein estimation of Poisson process intensities
[ Abstract ]
In this talk we will construct superefficient estimators of Stein type for the intensity parameter lambda > 0 of a Poisson process, using integration by parts and superharmonic functionals on the Poisson space.
[ Reference URL ]
http://www.ms.u-tokyo.ac.jp/~kengok/statseminar/2008/11.html
### December 12, 2008 (Fri)
#### GCOE Lecture Series on Mathematics in Society
16:20-17:50 Graduate School of Mathematical Sciences Bldg. (Komaba), Room 128
On heat transfer in gas turbine blades
|
# Characterization of weak Lebesgue spaces [closed]
Asked 2011-11-06, last activity 2011-11-08.
I would be interested to know whether the following is true:
Let $\Omega$ be a bounded open set in $\mathbf{R}^n$. Let $g$ be a nonnegative function $g : \Omega \to \mathbf{R}$. If there is a constant $C > 0$ such that $$\frac{1}{|A|^{1-1/p}} \int_A g \leq C$$ for all measurable subsets $A \subset \Omega$, then $g$ is in weak-$L^p(\Omega)$.
If the above inequality holds only for open balls in $\Omega$, is $g$ still in weak-$L^p(\Omega)$?
Edit: I changed the question to make it more relevant and less naive. Most comments below are out of date.
|
American Institute of Mathematical Sciences
August 2015, 8(4): 649-691. doi: 10.3934/dcdss.2015.8.649
Rate-independent memory in magneto-elastic materials
1 Università degli Studi del Sannio, P.zza Roma, 21 - 82100, Benevento, Italy, Italy
Received February 2014 Revised July 2014 Published October 2014
These notes originate from a group of lectures given at the Spring School on "Rate-independent evolutions and hysteresis modelling" (Hystry 2013), held at Politecnico di Milano and at Università degli Studi di Milano from May 27 until May 31, 2013. They are addressed to graduate students in mathematics and applied science interested in modeling rate-independent effects in smart systems. They therefore aim to provide the basic issues concerning the modeling of multi-functional materials showing memory phenomena, with emphasis on magnetostrictives, in view of their application to the design of smart devices. This tutorial summarizes several years' activity on these issues, which involved cooperation with several colleagues, above all Dr. P. Krejčí, to whom the authors are indebted.
Citation: Daniele Davino, Ciro Visone. Rate-independent memory in magneto-elastic materials. Discrete & Continuous Dynamical Systems - S, 2015, 8 (4) : 649-691. doi: 10.3934/dcdss.2015.8.649
|
# Proof that there are infinitely many $k$'s such that $a + k$ and $b + k$ are coprime
I need to show that for any $$a, b \in \mathbb{Z}^+$$ with $$a \neq b$$ there are infinitely many $$k \in \mathbb{Z}$$ such that $$a + k$$ and $$b + k$$ are relatively prime to each other.
I came up with a proof that uses the fact that there are infinitely many primes, so that we can always choose $k$ such that $a + k$ is prime and therefore $b + k$ is relatively prime to $a + k$ (assuming that $a > b$).
But I was given the hint $$\gcd(x, y) = \gcd(x, y - zx)$$ which I don't use in the proof that I came up with. Hence my question is, if there is a way to show this fact by only using this hint and some other basic facts about the gcd.
Assume that $$a>b$$. There are infinitely many numbers $$m>a$$ such that $$\gcd(m,a-b)=1$$ (since $$a-b$$ only has finitely many prime factors). In other words, there are infinitely many $$k$$'s such that $$\gcd(a+k,a-b)=1$$. But$$\gcd(a+k,a-b)=\gcd\bigl(a+k,(a+k)-(b+k)\bigr)=\gcd(a+k,b+k).$$
• The reason there are infinitely many $m>a$ co-prime to $a-b$ is that $\{1+n|a-b|: n\in \Bbb N\}$ is infinite. – DanielWainfleet Jan 16 at 13:51
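A quick numerical check of this construction (an added sketch, not part of the original answer): taking, say, $a=10$ and $b=4$, every $k$ with $a+k = 1 + n\,|a-b|$ gives a coprime pair, exactly as the identity $\gcd(a+k,b+k)=\gcd(a+k,a-b)$ predicts.

```r
# Small numerical check of the gcd(a+k, b+k) = gcd(a+k, a-b) identity.
gcd <- function(x, y) if (y == 0) x else gcd(y, x %% y)   # Euclid's algorithm

a <- 10; b <- 4                      # assumed example values
k <- 1 + (1:10) * abs(a - b) - a     # choices making a + k = 1 + n*(a-b), hence coprime to a-b
data.frame(k = k,
           gcd_ab = sapply(k, function(kk) gcd(a + kk, b + kk)))
# every gcd in the table is 1
```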
|
# Additional datasets: Structural type of dwelling by document type
## Background
Through collaboration with the Canada Mortgage and Housing Corporation (CMHC), CensusMapper has added and open-sourced a special cross-tabulation of Structural Type of Dwelling by Document Type down to the Census Tract level for the census years 2001, 2006, 2011 and 2016. Structural Type of Dwelling is a common census variable that describes the type of structure a dwelling unit is in. Document Type is a less frequently used variable that classifies whether the census determined the dwelling is either:
• occupied by usual residents (also known as a household);
• occupied by temporarily present persons; or,
• unoccupied.
This cross-tabulation has information on the structural type of the entire building stock, not just the occupied dwelling units. This is useful when trying to understand the built-up fabric of urban environments.
As an example, we look at the structure of the dwelling stock in the City of Toronto in 2016.
## Example usage: buildings unoccupied vs not occupied by usual residents
Dwellings registered as unoccupied on Census day capture the imagination of many, although people often mistakenly pull data on dwellings not occupied by usual residents as it is easily available in the standard Census profile data. The advantage of this custom cross-tabulation is that it allows researchers to zoom in on dwellings that were classified as unoccupied by the enumerator on Census day for additional detail.
# Packages used in this example
library(cancensus)
library(dplyr)
library(tidyr)
library(ggplot2)
In this example, we want to retrieve the custom structural dwelling cross-tab for the 2016 Census year with the code CA16xSD for the Toronto Census subdivision with the standard Statistics Canada region code 3520005. For more background on searching for Census geographical regions, see ?list_census_regions() or the Get started with cancensus vignette.
# Attribution for the dataset to be used in graphs
# (The line defining `attribution` appears to have been lost in extraction; the
#  call below to cancensus::dataset_attribution() is an assumed reconstruction.)
attribution <- dataset_attribution("CA16xSD")

# Select all base variables; this gives us total counts by structural type of dwelling
vars <- list_census_vectors("CA16xSD") %>%
  filter(is.na(parent_vector))
variables <- setNames(vars$vector,vars$label)
variables
The named vector labels the census variables we are about to query.
# Separate out the individual dwelling types
dwelling_types <- setdiff(names(variables),"Total dwellings")
# Grab the census data and compute shares for each dwelling type
census_data <- get_census("CA16xSD", regions = list(CSD = "3520005"), vectors = variables, quiet = TRUE) %>%
  pivot_longer(cols = all_of(dwelling_types)) %>%
  mutate(share = value / `Total dwellings`)
To visualize what this looks like on a bar chart:
ggplot(census_data,aes(x=reorder(name,share),y=share)) +
geom_bar(stat="identity",fill="steelblue") +
coord_flip() +
scale_y_continuous(labels=scales::percent) +
labs(title="City of Toronto dwelling units by structural type",
x=NULL,y=NULL,caption=attribution)
As with regular Census data, all data can be retrieved as spatial data. Sometimes it’s easier to use the CensusMapper API interface to search for and select the variables we are interested in. The explore_census_vectors() function opens a browser with the variable selection tool, we determine that “v_CA16xSD_1” and “v_CA16xSD_28” are the variables enumerating all dwellings and all unoccupied dwellings, respectively.
# Use explore_census_vectors() to browse and select variables of interest
vars <- c(Total="v_CA16xSD_1", Unoccupied="v_CA16xSD_28")
# Retrieve data with attached geography
census_data <- get_census("CA16xSD",regions=list(CSD="3520005"), level="CT", quiet = TRUE, geo_format = "sf",
vectors = vars,use_cache = FALSE) %>%
mutate(share=Unoccupied/Total)
# Visualize
ggplot(census_data,aes(fill=share)) +
geom_sf(size=0.1) +
scale_fill_viridis_c(labels=scales::percent) +
coord_sf(datum=NA) +
labs(title="City of Toronto dwellings unoccupied on census day",
fill="Share",
x=NULL,y=NULL,caption=attribution)
|
# Inverse(?) of Survival Analysis
I've been using the R package 'survival' recently.
I understand that the way to read the survival curves is: given a time X, what is the percentage Y of widgets still in the field. And I can get a confidence interval around Y.
But I've been asked if I can go the other way around. Given a percent still in field Y, can I get a time X (and a confidence interval around X) that matches?
I'm looking for either a function in the R package 'survival', or the theory behind generating this.
This is called prediction of the remaining life. There are many methods to do this, parametric and semiparametric. See for instance:
Prediction of remaining life of power transformers based on left truncated and right censored lifetime data
The idea is to estimate the probability that individual $j$ will fail/die at time $t$ given that it has survived until $t_j$. This is $P(T\leq t\vert T>t_j)$, with $t>t_j$.
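On the practical side of the question, here is a minimal sketch (added for illustration, assuming a Kaplan–Meier fit is acceptable). As far as I recall, the survival package provides a quantile() method for survfit objects: for a failure probability $p$ it returns the time at which the estimated survival drops to $1-p$, together with a confidence interval, which is exactly the "inverse" read-off asked about.

```r
library(survival)

# Kaplan-Meier fit on the package's built-in lung data (illustrative only)
fit <- survfit(Surv(time, status) ~ 1, data = lung)

# Time at which an estimated 70% of subjects are still "in the field" (Y = 0.7),
# i.e. the 30th percentile of the failure-time distribution, with its CI.
quantile(fit, probs = 0.3, conf.int = TRUE)
```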
|
# On the symbol length of symbols in Galois cohomology
Seminar
Speaker
Eliyahu Matzri (Bar-Ilan University)
Date
22/12/2021 - 10:30 - 11:30
Place
Third floor seminar room, Mathematics building, and on Zoom. See link below.
Abstract
Let $F$ be a field with absolute Galois group $G_F$, $p$ be a prime, and $\mu_{p^e}$ be the $G_F$-module of roots of unity of order dividing $p^e$ in a fixed algebraic closure of $F$.
Let $\alpha \in H^n(F,\mu_{p^e}^{\otimes n})$ be a symbol (i.e. $\alpha=a_1\cup \dots \cup a_n$ where $a_i\in H^1(F, \mu_{p^e})$) with effective exponent $p^{e-1}$ (that is, $p^{e-1}\alpha=0 \in H^n(G_F,\mu_p^{\otimes n})$). In this work we show how to write $\alpha$ as a sum of symbols from $H^n(F,\mu_{p^{e-1}}^{\otimes n})$. If $n>3$ and $p\neq 2$ we assume $F$ is prime to $p$ closed.
Last updated: 19/12/2021
|
# How do I determine if a complex number $w = f(z_1,\,z_2,\,z_3)$ is a known triangle center?
I've found a number of closely related functions, each of which takes three complex numbers $$z_1,\,z_2$$, and $$z_3$$ (which we can consider as the vertices of a triangle) as its arguments, and outputs a complex number $$w$$. A few of these functions represent well-known triangle centers $$w$$, such as the centroid (that is, $$w=(z_1+z_2+z_3)/3)$$), or the zeros and critical points of the polynomial $$R(z) := (z-z_1)(z-z_2)(z-z_3)$$. Other functions among them are less obvious, such as the following point $$w=p$$:
$$p := \frac{(z_2+z_3) z_1^2+\left(z_2^2-6z_2z_3+z_3^2\right) z_1+z_2z_3(z_2+z_3)\sqrt{-3(z_1-z_2)^2(z_1-z_3)^2(z_2-z_3)^2}}{2 \left(z_1^2+z_2^2+z_3^2-z_1z_2-z_1z_3-z_2z_3\right)}$$
As far as I can tell, by comparing the point $$p$$ to different triangle centers in GeoGebra, it seems to be the first isodynamic point to a high degree of precision for the triangles that I've thrown at it so far. My problem is twofold:
$$(1)$$ How do I prove that $$p$$ is (or isn't) the first isodynamic point of the triangle ($$z_1,\,z_2,\,z_3$$)?
$$(2)$$ How do I (numerically) match $$w$$ with likely triangle centers based on the Encyclopedia of Triangle Centers? GeoGebra uses a small portion of the material in the ETC, but unfortunately, it fails to implement some important triangle centers with low indices. Suggestions for other, similar resources are welcome, too.
As for $$(1)$$, I think a viable starting point might be to convert $$p$$ into barycentric or normalized trilinear coordinates and directly compare the result to the coordinates given in the ETC, but I'm not sure how to do this. It seems to be slightly easier to use Cartesian coordinates as a starting point, but Mathematica (for example) generally doesn't seem to be very inclined to return the real and imaginary parts of $$p$$ (for general $$z_k$$, e.g. when you let $$z_k = a_k + i b_k$$ for real numbers $$a_k,\,b_k,\,k=1,2,3,$$ and simplify the expressions).
Regarding $$(2)$$, the following website can supposedly be used to compare points to almost all entries in the ETC: https://faculty.evansville.edu/ck6/encyclopedia/Search_6_9_13.html
Unfortunately, I seem to get inconsistent results from my attempts to implement the algorithm on the website above in Mathematica. More specifically, certain triangle centers that I've tested (such as the centroid) yield the correct coordinates in the list, while other obvious ones that I've tested are either not listed, or have the wrong index. I suspect that I've misunderstood the information on the website in some elementary way, so feel free to correct my algorithm (for $$p$$ above, in the example that follows):
$$(i)$$ Choose $$z_1,\,z_2,\,z_3$$ as the vertices of a triangle with side lengths $$6,\,9,$$ and $$13$$, e.g. $$z_1 = 0,\,z_2 = 9,$$ and $$z_3 = (-26/9) + (8/9)\sqrt{35}i$$.
$$(ii)$$ Solve the following linear system of equations for $$u,v,w$$:
$$\begin{cases} u\,\text{Re}(z_1) + v\,\text{Re}(z_2) + w\,\text{Re}(z_3) = \text{Re}(p)\\ u\,\text{Im}(z_1) + v\,\text{Im}(z_2) + w\,\text{Im}(z_3) = \text{Im}(p)\\ u + v + w = 1, \end{cases}$$
where $$u:v:w$$ are the barycentric coordinates for $$p$$. (Here, $$p$$ can be approximated to a high degree of precision, e.g. to avoid problems with Mathematica...)
$$(iii)$$ Let $$a=6,\,b=9$$, and $$c=13$$ (or, more ideally, just use the Pythagorean theorem...), and define $$x = u/a,\,y = v/b,\,z = w/c$$. Furthermore, calculate the area of the triangle as $$A = 4\sqrt{35}$$.
$$(iv)$$ Calculate $$kx = 2Ax/(ax+by+cz) = 2Ax/(u+v+w)$$, which should be the sought "coordinate" in the table.
Carrying out the algorithm above yields $$kx \approx 0.14368543660$$, whereas the coordinate for the first isodynamic point (with index $$15$$ in the table) is $$\sim 3.10244402065$$. Perhaps $$p$$ isn't actually the first isodynamic point, but I like to think that I've done something wrong, like interpreting $$a,\,b,$$ and $$c$$ as a triangle's side lengths...
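For what it's worth, here is a small R sketch (added for illustration) of steps $(ii)$–$(iv)$ exactly as written above, using the 6-9-13 test triangle from step $(i)$. The centroid stands in for $p$ purely as a placeholder to exercise the pipeline; swap in your numerically evaluated $p$, and note that the side-length assignment $a=6,\,b=9,\,c=13$ is the question's convention, not a verified ETC convention.

```r
# Vertices of the 6-9-13 triangle from step (i)
z1 <- complex(real = 0,     imaginary = 0)
z2 <- complex(real = 9,     imaginary = 0)
z3 <- complex(real = -26/9, imaginary = (8/9) * sqrt(35))

p  <- (z1 + z2 + z3) / 3          # placeholder: the centroid; replace with your p

# Step (ii): barycentric coordinates u, v, w from the 3x3 linear system
M   <- rbind(Re(c(z1, z2, z3)), Im(c(z1, z2, z3)), c(1, 1, 1))
uvw <- solve(M, c(Re(p), Im(p), 1))

# Steps (iii)-(iv): side lengths as used in the question, area, and the candidate search value
sides <- c(6, 9, 13)
A     <- 4 * sqrt(35)
xyz   <- uvw / sides
kx    <- 2 * A * xyz[1] / sum(sides * xyz)   # = 2*A*x / (a*x + b*y + c*z)
kx
```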
|
Author:
F8CONE8 Posts: 18,106
7/4/09 9:00 P
I am so glad for you! Happy 4th of July.
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
CRAFTYC Posts: 3,501
7/4/09 7:09 P
Happy 4th! One nation, Under God, Indivisible, with Liberty and Justice for All!
We're happy to celebrate, and even more so because DH is actually home!!! It's been few and far between that we've spent time together this year, so it means a lot to have him here for a couple of days.
current weight: 120.0
JENNIRENEE Posts: 122
5/11/09 12:39 P
Thanks. I'm starting to check at stores around my area. If I can, I would like to avoid the iron-ons because they seem to warp in the dryer. I want something that will last past the 2nd wash, ya know? lol.
current weight: 166.0
F8CONE8 Posts: 18,106
5/8/09 12:25 A
There are iron-on transfers for ink jet printers but I don't know if they work well. There is a store in town that will put pictures on blankets so you might try an online photo or quilt supply store. It sounds like a wonderful idea.
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
JENNIRENEE Posts: 122
5/6/09 1:07 P
Hmm, I am thinking of putting together a photo quilt for my grandma for Christmas but I don't know where I would go to get the picture blocks done...how do you do that? I want to use the pictures of my nieces and nephews because she lives so far away. My job is merely to send her photos throughout the year and I want her to have something to cuddle in when she gets homesick. Can any of you guys help me out here? Thanks!
current weight: 166.0
F8CONE8 Posts: 18,106
5/1/09 11:06 P
Very good idea. I've seen a few patterns and even used that idea for a pottery piece I made. Hope it turns out well.
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
ZTRUTH Posts: 1,557
5/1/09 11:57 A
I wasn't going to start anything new until I get some projects done... BUT today I'm going to paint a stone like a Ladybug :o) and maybe one like a Lovebug whatever that looks like because that is what I call my girls.
Darkness hides the true size of fear, lies and regrets.
To read without reflection is a waste of time.
Pounds lost: 0.0
F8CONE8 Posts: 18,106
4/30/09 3:30 P
I really enjoy quilting but that is a winter time activity for me. I love being outdoors and now is the time to do it.
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
ZTRUTH Posts: 1,557
4/30/09 12:31 P
think I'm done with quilting but I'm FINALLY doing my bubble quilt or should I say bedspread.. we'll see how it develops. just have to get stuffing all them squares haha.
Darkness hides the true size of fear, lies and regrets.
To read without reflection is a waste of time.
Pounds lost: 0.0
JENNIRENEE Posts: 122
4/6/09 10:25 P
Quilting is definitely hard work. My mom is a quilter and has just recently purchased a quilting machine for putting them all together, so she is hoping to start a business. It is more a family business with all our crafts. She has 8 kids and we each have a different craft. It is kind of neat. We all love watching her put them together though. Some have even ended up being family projects.
current weight: 166.0
F8CONE8 Posts: 18,106
3/29/09 1:13 P
That sounds good. Right now i am really having fun with quilting. I am a novice and learning a lot. I do need to take some time and get my photos ready for the museum though. Sigh, I am excited but hate the prep work,
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
JENNIRENEE Posts: 122
3/25/09 8:53 P
what I am doing is using scrapbooking material and making a hanging collage for my parents of all their kids and grandkids. They ended up working great because of their size.
current weight: 166.0
TIME_FLIES Posts: 3,620
3/25/09 7:56 P
What kind of project are you doing with old cds? I've never really found one I liked. I painted one, but didn't know what to do with it. I didn't like hanging it up or anything. I'm not much of a doo-dad person.
current weight: 182.5
JENNIRENEE Posts: 122
3/24/09 10:28 P
*sigh* I could really use some support today. Not only was I starting to like a guy because I thought he liked me, but I am at a loss because my counts have started dropping again after a pretty good run, and now all of a sudden I haven't heard from him in like 3 days. I keep thinking that maybe he is busy or something but it isn't like him. It stinks because I already haven't been feeling good and now he just added stress, but at the same time, I have been gaining weight because of some of the meds I am on (I have had cancer for 2 1/2 years now) and people keep mentioning it. :0( I really need to do something about not so much the weight but at least eating better and gaining some energy back. I think at this moment I am crushed. :0(
On a plus side, I did find out that my best friends are having a little girl (due in July) and I am getting ready to start knitting a blanket for them. :0)
current weight: 166.0
JENNIRENEE Posts: 122
3/14/09 7:39 P
So I definitely still haven't completed my project using my old cds, but I am working on it. I have been a bit under the weather, but today I did break out my paints and started working on painting some flower pots to use as wedding centerpieces. It's a Frog Prince theme. I am really excited but it is a lot of work! I will post some pictures here on my blog in a sec. So feel free to stop by and check out my first one. I need a smaller brush for outlining though. :0(
current weight: 166.0
JENNIRENEE Posts: 122
3/14/09 2:45 P
Can you Help Our Team?
As some of you know from my groups, I was diagnosed with a rare form of sarcoma in 2006. I was given 6 months to live; however, 2 years later here I am still fighting. For the last 2 years my team (Team Snowflake) and I have participated in the events for St. Baldrick's. For those who don't know what this is: St. Baldrick's is an organization where people volunteer to raise money to shave their heads. All the money goes toward childhood cancer and is tax deductible! You can visit the site at www.stbaldricks.org
Unfortunately this year we are not hosting an event in my home area; however, a member of our team is hosting an event live right now in Wichita Falls, Tx. The best part is that her 8 year old son Alex is one of the shavees.
Each year so far we have reached our goals, starting in 2007 when Sister Snowflake shaved her head and raised only $40, which was great because she was added as a shavee last minute. Then in 2008, Jennifer, who is holding the event right now in Tx., flew to Pa to shave her head alongside me (the first and only year I had real hair to do so) and together our Team raised over $1100.
Well, right now Alex is preparing to shave his head and could use all our help. He is currently sitting at $270 and his goal is $1000. Do you think you could help? Or do you know anyone that might be interested? If so please visit this link and donate, or please pass this message on. Even $1 is a big help to not only Alex but to all the kids this organization helps!
www.stbaldricks.org/participants/search.php?NewSearch=Y&SearchFormID=20090R>307065914&SearchFor=Participant&SearchEventYear=2009&SearchFirst=alexander&SearchLast=farrell&SearchForShavee=1&SearchForBarber=2&SearchForVolunteer=99&SearchTeamName=&x=15&y=6#
If you would like to see the pictures with this post please visit my blog.
current weight: 166.0
F8CONE8 Posts: 18,106
3/14/09 1:19 P
I am really excited about all of the things I have planned for this weekend. It should be fun and productive. My little fingers will finish my DG quilt and then start getting ready for the photo show. Woo Hoo
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
JENNIRENEE Posts: 122
3/7/09 11:33 P
Hi! I haven't been feeling the greatest but have been pushing through. (yucky cancer) But I have been trying to figure out how to recycle all my used cds and I have decided to use scrapbooking stuff and pictures of my family to make hanging wall collages to go with a quote I got to hang on my wall. It says "Laughter sparkles like a splash of water in sunlight". It is supposed to be for your bathroom but I have it in my bedroom, so I am taking pictures of my family and friends that are laughing and just funny things. :0) Otherwise making puppets for a fundraiser coming up next week for Childhood Cancer Awareness. Hope everyone is doing well. Time Flies... I hope you feel better soon!
Keep Smiling!
current weight: 166.0
TIME_FLIES Posts: 3,620
3/3/09 5:47 P
I've been sick for the last few weeks. I need to get started on a small cross stitch project. I have a bib that I want to cross stitch "Spit Happens" with lots of stains.
current weight: 182.5
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
3/2/09 7:13 A
How has everyone been doing this past few weeks? What are you working on?
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
2/16/09 7:22 A
Hey Everyone.... It has been a while since I have posted in our group. What has everyone been doing? How has everyone been doing? Things for me have been crazy busy at work and my personal life has been just as busy. I haven't been working out lately and my weight has shown that fact. It is time for me to get back on the treadmill.... TONIGHT!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
F8CONE8 Posts: 18,106
2/4/09 8:04 P
Hi glad to to hear from you! Working on photos for a show in April and another in May. Having a lot of fun!
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
2/4/09 1:27 P
I just wanted to check in with everyone. How is every doing? Are you still focused on your goals? What is everyone doing in these winter months to stay active?
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
F8CONE8 Posts: 18,106
2/2/09 8:18 P
Oh yes I am. This is like one of those toys you couldn't wait to get and played with as much as possible. I feel like a kid. Trouble is I have 2 quilts that need finishing and I need to do some housework and stuff. Dang the days aren't long enough.
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
2/2/09 7:53 A
That is really cool.... I bet you are enoying it!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
F8CONE8 Posts: 18,106
2/1/09 9:36 P
It does both color and black and white. It has 3 blacks and 6 colors so it is pretty versatile.
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/30/09 2:29 P
Is your printer a color or Black and White printer?
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
F8CONE8 Posts: 18,106
1/29/09 5:29 P
Hi Kelly and all! I was just dropping in to see what everyone is up to. I'm on a quilting kick right now so my photo studio is being a tad neglected. I did print my first 11X17 on my new printer. It is so huge compared to my little 8x12 prints. I am hoping to have at least 5 dynamite prints for our Art Museum show in May. That is a lot when I consider my competition will be semi-pros! Well, if I get to hang one then it will all be worth it. Meanwhile, I am enjoying life.
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/29/09 8:07 A
I hope everyone is safe and sound after this snow storm the country experienced. I know I got a major work out with the snow shovel. We had over 12 inches.... Ugh!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
TIME_FLIES Posts: 3,620
1/24/09 10:30 P
Miserable? Well, only if you think dressing for the weather is the right thing to do. Otherwise, you never know what to dress for.
I can't wait til spring. Then it will at least equal out.
current weight: 182.5
F8CONE8 Posts: 18,106
1/24/09 1:38 P
That must be miserable. I usually dress in layers because I have to go between buildings so much. My classroom is about 25 yards from my office. I always have a rain jacket with me just in case.
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
TIME_FLIES Posts: 3,620
1/22/09 5:38 P
My room at work gets so HOT! It's terrible because you can't dress for the season. I have to wear something light enough so I don't die of heat stroke, but warm enough that when I'm not there I don't freeze.
I am tired of the snow. today was sunny and just barely warm enough to start melting the snow.
current weight: 182.5
F8CONE8 Posts: 18,106
1/22/09 3:39 P
I take it one day at a time. We have been having a warm spell but I know it won't last. We really do need to have rain but I do enjoy these semi-sunny days.
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
SONATASMOM SparkPoints: (0)
Fitness Minutes: (195)
Posts: 19
1/21/09 1:34 P
It's cold everywhere - here in GA it was 13 this morning - and no one in GA knows what to do with the cold. But Spring will get here EVENTUALLY - just like EVENTUALLY - we will meet our goals of losing weight - and getting fit. One step at a time!
Teri
Total Goal: 255 pouds
20% - reached 01/15/09 50 pounds
40% -
60% -
80% -
100% -
Pounds lost: 63.8
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/21/09 12:44 P
Will it ever warm up here in Ohio. I have been so darn cold at work all week. I think I am going to be buying a heater for work because my office is cold cold cold. I have two very large windows in my office and normally I like them.... but not when I realized that my office is colder because of them.
Is Spring ever going to get here. It is like 12 degrees today and not fun at all.
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SONATASMOM SparkPoints: (0)
Fitness Minutes: (195)
Posts: 19
1/19/09 1:30 P
I had a busy, busy weekend - out with friends on Friday night - Saturday night - AND Sunday! Eating out is so difficult and I have to say that from Friday morning until this morning I had a net gain of 0.2 pounds - but I'm actually pretty excited about that - I see that as maintaining - and that's OK.
Teri
Total Goal: 255 pouds
20% - reached 01/15/09 50 pounds
40% -
60% -
80% -
100% -
Pounds lost: 63.8
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/19/09 7:36 A
It looks like I lost 3.2 pounds this week.
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SHEILARIE1 Posts: 393
1/18/09 9:13 A
Kelly, keep up the good work.
After my kids popped popcorn for two nights in a row, I finally broke down and had some. Problem is, I didn't just have a little bowl... I ate the whole bag. I was so mad at myself. However, I got up early the next morning and worked out, and got right back on the wagon to my weight loss.
We can do this!!!
Sheila
You are special and you deserve to be healthy!
God bless you and yours today!
There are two ways to spread light; to be the candle, or the mirror that reflects it.
Edith Wharton
current weight: 212.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/16/09 9:55 A
What plans does everyone have this weekend?
Tod and I have a busy weekend. We are calling it our parents weekend. On Saturday, we are going to pick up Tod's dad from his rehab hospital. He has been there since the 7th and is doing great. He is 78 and had spine surgery. The doctors say they are amazed at how well he is doing and that it is because he was so fit and healthy before the surgery. Then on Sunday, we are going over to my mother's house to take her shopping for a recliner. My step father died just over a month ago and it was his recliner that she wants to replace.
Plus... I have to get in more exercise. I have to say last night was a battle within myself. I was pretty set on not walking on the treadmill last night. I went into my bedroom to put on my jammies, but ended up putting my workout clothes on. I knew I would be disappointed if I didn't do it.... so I actually walked for 60 mins to prove to myself that I can do it even when I don't feel like it!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SOBAIN Posts: 13
1/16/09 9:09 A
Good Job, Kelly!!
It is so hard to make yourself exercise when your body isn't cooperating. But, you did. Sometimes I look for excuses not to exercise and then feel guilty about it later. You did great!! Keep up the good work.
current weight: 140.5
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/15/09 8:09 A
I am proud of myself. I actually walked on the treadmill for 40 minutes last night with bad cramps. My husband said I should take the night off... but I said I am not using it as an excuse... because I am the queen of excuses!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/14/09 1:13 P
Thanks for sharing the photos of your soap Susie! It makes me want to try it now!
Tonight will be a challenge for me to work out because right now I have major cramps. I am sitting at work not liking life right now :) Normally, I would use it as an excuse, but I can't keep making excuses anymore. I think I am going to pop some Midol when I get home and just cook dinner for my husband and get my butt downstairs.
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SOBAIN Posts: 13
1/14/09 8:43 A
Good Morning everybody!!
Thank god for Wednesday--I take Wed. and Friday off, no exercise. Well, no formal exercise. I work in the field and walk around outside most of the day. I have been trying hard to squeeze exercise in. Last night I walked for about an hour. I have to do it as soon as I get home from work or I will eat. Something about the end of the work day that makes me hungry!! Ya'll have a great day.
current weight: 140.5
TIME_FLIES Posts: 3,620
1/14/09 6:54 A
I feel the same way! I can't work out after work if I don't go downstairs the moment I get home.
I go right upstairs, change, work-out and then I'm ready to be 'at home.' Otherwise, if I have to wait any amount of time, I have excuses.
I look at it this way, if I would have gone to a gym then I wouldn't have been home until I finished anyway. This way I save time, money and I've exercised.
Good luck! Getting into a routine helps. I know about the feet. I work on my feet all day. I have noticed though, when I'm under 170# they don't hurt as much. And they hurt even less the more I lose. But my feet have always hurt, even when I was in my 20's and 135#.
current weight: 182.5
LOSINTHEFAT2008 Posts: 2,536
1/14/09 4:39 A
I DIDN'T KNOW WHERE TO POST THIS. I UPLOADED SOME PICTURES IN MY PHOTO GALLERY OF SOAP I HAVE MADE.
SUSIE
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/13/09 8:56 P
It is so hard to get the energy to work out after a long day at work.... but I know I need to do it. Tonight was day two on the treadmill and as I sit here my feet hurt. I realized that I have to keep doing this and I can't stop trying!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/13/09 2:03 P
That is an awesome weight loss Sobain! You should be really proud of yourself!
I know when my computer is down it drives me crazy. I feel like I really need to get online... or I have important things to do!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SOBAIN Posts: 13
1/13/09 8:21 A
My computer had to go to the doctor, so I have been out for a couple of days. I lost 4.5 lbs. last week !!!! It was my first week, and I know my losses won't be that great in the future--but it was a fantastic boost. I have gotten my exercise so far this week. I rode my bike for 3.5 hours Sunday and walked for an hour last night. Good luck to everyone and have a great day!! :)
current weight: 140.5
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/12/09 9:25 P
I walked on the treadmill tonight for 30 mins. I was really proud of myself. I felt I could go longer, but I didn't want to over do it on day one back on the treadmill.
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
F8CONE8 Posts: 18,106
1/12/09 11:36 A
Hey CraftC - Good luck today! Is she moving to campus? Sounds like a big step!
Kelly - Now your mission is to lose 1.6 by next week. That is the same as my mission. We can do this! Yes we can, Yes we can. Oh, I forgot myself for a minute. Carol
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/12/09 7:48 A
I am going to keep my head up ... even though I had a gain of .6 pounds this week. I am not going to get discouraged.
Here in Ohio we got about 10 inches of snow. It is crazy here. I am hoping the roads are ok by now because I am heading out to work.
Have a GREAT day everyone!
Kelly
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/7/09 1:01 P
I bet you do have some amazing things Sobain. You should post some pics in your gallery.... because I know everyone loves to see each other's work!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SOBAIN Posts: 13
1/7/09 9:47 A
Good Morning everybody. I love to exercise-but outside. I hate that "caged" feeling of being inside, especially on a treadmill. I work in construction so I get plenty of the outdoors. It is also a fantastic place to find "goodies" to work with. I make all kinds of things from construction debris. Ya'll have a great day.
current weight: 140.5
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/7/09 8:12 A
That memory quilt sounds really interesting and such a great piece of family history and a keepsake for your family.
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
UNBIJOU1951 SparkPoints: (0)
Fitness Minutes: (5,660)
Posts: 363
1/6/09 10:20 P
Hi all
I made a memory quilt with pictures for my sister's 50th birthday, starting with pictures of her childhood to her holding her first grandchild. She loved it.
I love my treadmill. I like to listen to disco while I walk.
This time next year,I will be glad I didn't QUIT today!
current weight: 197.0
TIME_FLIES Posts: 3,620
1/6/09 7:57 P
When I do the elliptical I like to listen to upbeat music. TV and slow songs just keep me plodding and I can't go faster no matter what.
I hope you like House. I'm not a fan. Being in healthcare, I find it unrealistic.
current weight: 182.5
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/6/09 1:03 P
So... My husband got his new TV last night and started to set it up. I am hoping that by Wednesday it will be up and running, so I can walk on the treadmill and watch TV. We bought the TV series of HOUSE, so I am going to watch that while I walk on the treadmill. I have never seen an episode of it but we hear it is good!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/5/09 2:09 P
I was a photography major in college, so I always like to hear what people do with photos. I am actually surprised I have never gotten into scrapbooking.... yet :)
Your shirts and plates sound very cool! You should put some photos on your sparkpage sometime, so we can see what they look like!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
LOSINTHEFAT2008 Posts: 2,536
1/4/09 9:59 P
HI. I MOSTLY PLAY AROUND WITH PICTURES. I LIKE TO TAKE PICTURES AND PUT THEM ON SHIRTS. I ALSO LIKE TO TAKE MY PICTURES AND PUT THEM ON A CLEAR GLASS PLATE. I MADE MY WHOLE FAMILY A GROUP PICTURE OF US ALL ON A PLATE AND GAVE THEM ALL ONE. IT WAS MY LAST GROUP PICTURE TOGETHER BEFORE MY DAD PASSED AWAY. WOULD LIKE TO DO OTHER STUFF ALSO. GLAD I FOUND YOU ALL.
SUSIE
TIME_FLIES Posts: 3,620
1/4/09 9:42 P
Hi Susie! It's good to see you here. What kind of crafty things do you like to do?
current weight: 182.5
LOSINTHEFAT2008 Posts: 2,536
1/4/09 9:16 P
HI EVERYONE I JUST FOUND THIS TEAM. LOOKING FORWARD TO GETTING TO KNOW YOU ALL.
SUSIE
TIME_FLIES Posts: 3,620
1/4/09 8:11 P
You have to make up something better than tripping through your front door! It does not make for an interesting tale.
Today was a good day for me. I did day one of the Boot Camp, sewed a bit which I haven't done for eons, and put things up on Etsy.
I need to clean my craft room. It's been a work in progress for 2 years. So most of the stuff I put up on Etsy was de-stashing my supplies.
I really feel like I've accomplished a lot today. And I hope the Boot camp momentum drives me straight on to the 31st!
current weight: 182.5
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/4/09 6:22 P
I was walking into my house this morning from letting the dog in. I was going up the one step to my front porch and I tripped, flew into the door and smashed my face into the door handle.... and I tried to break my fall, so I put out my hand, which only caused me to hurt my wrist.
I think I am going to have a great story to tell all my customers as to why I am black and blue under my eye and my wrist is swollen.
What a GREAT weekend :)
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
1/2/09 5:19 P
Does anyone have weekend plans?
I am going to take my mother to see Marley and Me tomorrow which should be cute because she loves all movies with dogs in them.
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
TIME_FLIES Posts: 3,620
1/1/09 8:05 P
We didn't do anything, either. But then again, we never do. My family never made a big deal out of New Year's Day except that we got it off from work or got paid overtime.
current weight: 182.5
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
12/31/08 10:19 A
I am so glad 2009 is almost here! Does anyone have any holiday plans for today or tomorrow?
My husband Tod and I will actually be staying in this evening... which is a good thing because we got hit with snow today. (We live in NE Ohio.) I think tomorrow we do not have any set plans, but probably cleaning the house and organizing bills. I know it sounds exciting, so I hope you all don't get jealous!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
12/30/08 4:28 P
That is great you got your area cleaned up. I am not looking forward to it myself. I have so many things I need to organize. My goal is to be done within 2 weeks because we are having people over to our house who have never been here before. I know everyone will want to see my studio.... UGH!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
TIME_FLIES Posts: 3,620
12/28/08 11:14 A
It did get cleaned, sort of (we have open space in the middle of the room . . . clean enough) and then the girl made her Little Red Riding Wolf and well, it looks like the Woodcutter got started on the wolf with all the fur on the floor.
(The bottom Signature Link will show you the Wolf that she made.)
Edited by: TIME_FLIES at: 12/28/2008 (11:15)
current weight: 182.5
CINNIEMAY Posts: 2,753
12/27/08 10:01 P
I know what you mean about cleaning the craft room! With all that I made for Christmas I didn't have time to clean as I went so I need to work on it now! That is my plan for next week!
current weight: 250.2
TIME_FLIES Posts: 3,620
12/26/08 10:46 P
Just stopped by to say hello and see what was up! I just joined. I'm not working on any project at the moment, but I have quite a few waiting for me.
My first step, though, is to clean up my play (craft) room. Right now it's such a mess with both my daughter and me doing this, that and the other. Of course, she is messier than I am.
current weight: 182.5
CINNIEMAY Posts: 2,753
12/24/08 11:43 P
Welcome! I am glad you joined us!
current weight: 250.2
BELINCKY SparkPoints: (0)
Fitness Minutes: (440)
Posts: 2
12/24/08 11:52 A
Hello Everyone!
I am brand new to SparkPeople and to Kelly's A & C team page! Very excited to be part of the team, and wanted to wish everyone a very Happy Holiday!
current weight: 207.0
CINNIEMAY Posts: 2,753
12/23/08 7:45 P
I guess everyone is working on their Christmas projects! Have a great Christmas!
current weight: 250.2
KUNGFOOD Posts: 2,953
12/17/08 7:11 P
Hi everyone. Finishing up Christmas card mailing this week, and looking forward to the holidays!
The rule is, jam tomorrow and jam yesterday - but never jam today.
--Lewis Carroll
Pounds lost: 25.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
12/16/08 7:49 P
Hey Everyone....
I am sorry for being MIA for a while. My family's life has been crazy since July, when my step-father was told he had pancreatic cancer. He lost his battle with cancer last week, which broke our hearts. We are just trying to get back to our normal lives, and that means getting back on track with losing weight and getting this group active again.
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
MYCHEER SparkPoints: (0)
Fitness Minutes: (24,311)
Posts: 8,985
12/6/08 9:15 A
I love to share pages that I find for others to read. ARTHURTOM is one that I think would inspire you. Read his page and look at his pics. I nominated him for motivator because he really motivates me. He's the last one to comment on my page. You can go and click on him there to find him faster.
susan
They that wait upon the Lord shall renew their strength, they shall mount up with wings as eagles.
Isaiah 40:31
Pounds lost: 16.0
NOTSOTHIN08 SparkPoints: (0)
Fitness Minutes: (2,504)
Posts: 2,380
12/6/08 8:48 A
Just wanted to say Good Morning to everyone...Have a great weekend!
current weight: 151.0
MYCHEER SparkPoints: (0)
Fitness Minutes: (24,311)
Posts: 8,985
12/1/08 6:25 P
Hi everyone. Hope your Thanksgiving was a blessing for all of you.
susan
They that wait upon the Lord shall renew their strength, they shall mount up with wings as eagles.
Isaiah 40:31
Pounds lost: 16.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
11/30/08 2:33 P
Hey Everyone... How was your Thanksgiving weekend? We had a very nice time with our family. My husband and I added a new member to the family. We got a kitten. It is the cutest little thing. We named it Smokey!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
F8CONE8 Posts: 18,106
11/27/08 1:57 P
Happy Thanksgiving Everyone!
teams.sparkpeople.com/photobugs
teams.sparkpeople.com/artists
teams
Hugsfromoregon.com
Pounds lost: 9.0
MYCHEER SparkPoints: (0)
Fitness Minutes: (24,311)
Posts: 8,985
11/25/08 8:48 A
Very quiet here.
They that wait upon the Lord shall renew their strength, they shall mount up with wings as eagles.
Isaiah 40:31
Pounds lost: 16.0
MYCHEER SparkPoints: (0)
Fitness Minutes: (24,311)
Posts: 8,985
11/24/08 8:28 A
Friday I had my class make a craft for their Thanksgiving table. We took construction paper and made it round with a piece of paper on the bottom. So it was like a bowl. Then took some pretty wide ribbon with leaves on it and put it around the round part. Filled it with potpourri and a pretty gold leaf. The parents loved them when they came in.
Have a blessed day.
susan
They that wait upon the Lord shall renew their strength, they shall mount up with wings as eagles.
Isaiah 40:31
Pounds lost: 16.0
MYCHEER SparkPoints: (0)
Fitness Minutes: (24,311)
Posts: 8,985
11/22/08 2:18 P
Hi everyone. Just got back from delivering Thanksgiving baskets to those who applied through our Loaves and Fishes. My DH and I took 31 baskets. Should have taken more. They had at least 700 to deliver, but I also heard there were 1,200. Anyway, good workout. Up and down stairs.
Have a great day.
susan
They that wait upon the Lord shall renew their strength, they shall mount up with wings as eagles.
Isaiah 40:31
Pounds lost: 16.0
0
11.25
22.5
33.75
45
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
11/21/08 4:26 P
What is everyone doing this coming weekend?
I am going to try and get some Christmas shopping done if the snow will give us a bit of a break here.
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
321
287
253
219
185
MYCHEER SparkPoints: (0)
Fitness Minutes: (24,311)
Posts: 8,985
11/21/08 8:27 A
I haven't seen snow yet. But it has snowed in Colorado.
Last night we went to see the movie Fireproof with Kirk Cameron. It is a Christian movie about a couple whose marriage is almost at an end. Great movie.
It is cold here today. I only work a half day today and then will come home and clean house before the weekend hits.
susan
They that wait upon the Lord shall renew their strength, they shall mount up with wings as eagles.
Isaiah 40:31
Pounds lost: 16.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
11/20/08 1:35 P
We had our first snowfall a few days ago... I woke up and was surprised to see the white stuff on the ground. Hopefully everyone is safe and warm this winter season!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
321
287
253
219
185
MYCHEER SparkPoints: (0)
Fitness Minutes: (24,311)
Posts: 8,985
11/18/08 8:04 A
I think any time we are up and moving is a workout. Yesterday I got up and walked with Leslie Sansone and today I got up at 5 and got on my elliptical. Now to see if I can do this all week. I love to do it first thing and get it over with.
susan
They that wait upon the Lord shall renew their strength, they shall mount up with wings as eagles.
Isaiah 40:31
Pounds lost: 16.0
COUNTRYGAPEACH Posts: 729
11/18/08 3:39 A
Hello team! I am doing much better, or let me say I did much better on Monday. I actually got things accomplished and drank my water. Still, I didn't actually exercise. I had to pick up pecans, and I think that is exercise! I was on the ground, no pecan picker for me, for about 2 hours straight. Isn't that exercise?? Anyway, just checkin in with ya'll???
MYCHEER SparkPoints: (0)
Fitness Minutes: (24,311)
Posts: 8,985
11/16/08 9:42 A
I didn't work out yesterday either, although I did clean, so hopefully that helped.
Have a great and blessed day today.
susan
They that wait upon the Lord shall renew their strength, they shall mount up with wings as eagles.
Isaiah 40:31
Pounds lost: 16.0
COUNTRYGAPEACH Posts: 729
11/15/08 8:40 P
Hey everyone!
Hope the weekend is going well? Here it is soooo rainy~ and turning cold now~ugh!~ Anyways, snuggled up with my blankie and coffee, watching the hubby holler at the tv (he is watching football). Just thought I would say hey~ Today I didn't do anything I should, NO EXERCISE~NO WATER, I WAS BAD,~ But the monthly is sneaking up on me, and I don't feel good! Anyways, holler at ya'll later! Welcome Crissy!
CHRISSY-2706 Posts: 21
11/15/08 2:30 P
Thank you so much for welcoming me to the team.
I've had a bit of a rough week. I decided to try a new workout on Tuesday and have been barely able to move the past few days... Today I feel a little stiff but that's about it. I am hoping to get back into the swing of things on Monday. I miss my workouts. I have 2 more work days, then I have days off to work on my Christmas present ideas and get the house cleaned up. I can't wait. I'm rather looking forward to it!
I hope you all have a great weekend talk to you soon. Hugs.
~Chrissy
Pounds lost: 41.4
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
11/14/08 3:31 P
Is everyone ready for the weekend????
I am so ready for the work day to be over, at least because I am going over to my friend's house for a game night. I think that will be so much fun!
This weekend we will probably be raking up leaves like crazy!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
321
287
253
219
185
MYCHEER SparkPoints: (0)
Fitness Minutes: (24,311)
Posts: 8,985
11/12/08 11:40 P
Good evening everyone. Welcome Chrissy. I just got back from church. I'm very tired tonight. Need to get to bed so I can get up at 5 and hop on the elliptical.
susan
They that wait upon the Lord shall renew their strength, they shall mount up with wings as eagles.
Isaiah 40:31
Pounds lost: 16.0
IRISOVER50 Posts: 8
11/12/08 6:38 P
Chrissy
current weight: 210.0
SUNGLORRY SparkPoints: (0)
Fitness Minutes: (11,411)
Posts: 5,104
11/12/08 12:06 P
We are glad you joined the group Chrissy!!
Great job on the weight loss Susan!!
Kelly
5% Goal is 16
10% Goal is 34
15% Goal is 50
20% Goal is 67
25% Goal is 84
30% Goal is 100
35% Goal is 117
40.3% Target goal is 135
current weight: 221.6
321
287
253
219
185
IRISOVER50 Posts: 8
11/12/08 11:50 A
susan!
current weight: 210.0